Related skills:
Python SQL Spark Airflow
$$$ Full time
Automation and Reporting Analyst
  • CloudWalk
  • São Paulo
analyst web fintech cloud

About CloudWalk:

We are not just another fintech unicorn. We are a pack of dreamers, makers, and tech enthusiasts building the future of payments. With millions of happy customers and a hunger for innovation, we're now expanding our neural network - literally and metaphorically.


About the Role:

You will join our reporting team, focused on building automation and reporting solutions that scale across all of CloudWalk’s products. This is not just about data pipelines — you’ll also contribute to the creation of a reporting app, including its infrastructure and a web-based interface. AI will be at the center of everything we do, and you’ll be applying it in every step of development.

We’re looking for someone with strong critical thinking for data, grit to overcome challenges, and an endless curiosity for technology. You will be at the intersection of compliance, product, and engineering, helping us reimagine how reporting and automation can become smarter, faster, and globally scalable.




What You’ll Be Doing:
  • Develop and maintain automation processes for reporting and data workflows.
  • Build and optimize SQL queries to ensure accuracy and scalability.
  • Apply AI in daily development, from automation to anomaly detection and intelligent reporting.
  • Collaborate with teams across the company to ensure reporting solutions serve multiple products and stakeholders.
  • Contribute to the development of our reporting application (infrastructure and webapp).
  • Document processes and continuously improve automation and reporting practices.
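
To make the automation concrete, here is a minimal sketch of the kind of SQL-backed data-quality gate the bullets above describe. The table, columns, and checks are hypothetical (the posting names no schema); SQLite stands in for the production database.

```python
import sqlite3

# Illustrative only: table and column names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER, merchant TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?, ?)",
    [(1, "acme", 120.0), (2, "acme", 80.0), (3, "globex", None)],
)

def run_quality_checks(conn):
    """Run simple SQL checks a reporting job might gate on before publishing."""
    checks = {
        # Every payment should have an amount.
        "no_null_amounts": "SELECT COUNT(*) FROM payments WHERE amount IS NULL",
        # Payment IDs should be unique.
        "no_duplicate_ids": """
            SELECT COUNT(*) FROM (
                SELECT id FROM payments GROUP BY id HAVING COUNT(*) > 1
            )""",
    }
    return {name: conn.execute(sql).fetchone()[0] for name, sql in checks.items()}

# Any non-zero count is a failed check the automation could alert on.
failures = {k: v for k, v in run_quality_checks(conn).items() if v > 0}
print(failures)  # {'no_null_amounts': 1}
```

In a real pipeline the failing checks would block the report or page the team rather than just print.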


What You Need to Succeed:
  • Solid knowledge of SQL and hands-on experience with automation.
  • Strong critical thinking for data validation and problem-solving.
  • Passion for technology, with curiosity and openness to apply AI in practical ways.
  • Grit and perseverance to handle challenges and deliver results.
  • Effective communication and ability to collaborate with multidisciplinary teams.


Nice to Haves:
  • Experience with Kubernetes or applied AI in production environments.
  • Exposure to cloud platforms and containerized infrastructure.
  • Familiarity with web applications or chatbots.
  • Experience in fintech or complex reporting environments.



Join us at CloudWalk, where we’re not just engineering solutions; we’re building a smarter, AI-driven future for payments—together.


By applying for this position, your data will be processed in accordance with CloudWalk's Privacy Policy, which you can read here in Portuguese and here in English.



Please mention the word **TRUSTED** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
analyst technical software code

About Sayari: 

Sayari is a risk intelligence company that provides the public and private sectors with immediate visibility into complex commercial relationships by delivering the largest commercially available collection of corporate and trade data, drawn from over 250 jurisdictions worldwide. Sayari's solutions enable risk resilience, mission-critical investigations, and better economic decisions.
 
Headquartered in Washington, D.C., Sayari’s solutions are trusted by Fortune 500 companies, financial institutions, and government agencies, and are used globally in over 35 countries. Funded by world-class investors, with a strategic $228 million investment by TPG Inc. (NASDAQ: TPG) in 2024, Sayari has been recognized by the Inc. 5000 and the Deloitte Technology Fast 500 as one of the fastest growing private companies in the United States and was featured as one of Inc.’s “Best Workplaces” for 2025.

POSITION DESCRIPTION

You will be the technical and mission expert for Sayari's most strategic government partners. You will embed directly with government analysts, operators, and data scientists to solve their hardest mission-enabling intelligence and law enforcement problems. Your primary objective is to ensure that Sayari is deeply integrated into our clients' workflows, becoming an indispensable tool for missions ranging from sanctions evasion and counter-threat finance to securing critical supply chains. This is software engineering on the front lines, placing you at the critical juncture between our technology, our government clients, and their high-stakes missions.

This role is a blend of a software engineer, a data analyst, and a mission consultant. You will be architecting data pipelines or writing production code one day and brief

Please mention the word **ENDEARING** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

$$$ Full time
Finance Analyst
  • H1
  • New York
analyst saas system technical

At H1, we believe access to the best healthcare information is a basic human right. Our mission is to provide a platform that can optimally inform every doctor interaction globally. This promotes health equity and builds needed trust in healthcare systems. To accomplish this, our teams harness the power of data and AI technology to unlock groundbreaking medical insights and convert those insights into actions that result in optimal patient outcomes and accelerate an equitable and inclusive drug development lifecycle. Visit h1.co to learn more about us.


The Finance team plays a crucial role in creating that future. It is our role to serve as a liaison between H1's Commercial & Technical teams to oversee issues related to financial reporting, analysis, forecasting, and planning, as well as resource prioritization and business management. With a deep understanding of the business levers underlying the operations of our Infrastructure team, this team is responsible for helping the business drive toward clear and effective decisions that are critical to the success of the Company.


WHAT YOU'LL DO AT H1

As a Finance Analyst, you’ll be part of a highly visible team that partners with leaders and departments across the company. You’ll support the finance team with quarterly and annual forecasting, expense budgeting, key metrics reporting and analysis, close processes, and variance analysis, while also driving various automation and simplification projects.


- Assist with the preparation of annual budgets and financial forecasts to ensure alignment with the company’s strategic goals and key initiatives

- Support the finance team in reporting and analyzing key metrics such as annual recurring revenue (ARR) and churn

- Provide actionable insights on revenue and collection trends, customer retention and profitability, and other key performance drivers

- Assist with the implementation of variable compensation plans for teams across the organization

- Track and calculate monthly, quarterly, and annual sales commissions in accordance with approved compensation plans

- Support monthly financial presentations for both the executive team and board of director meetings

- Implement scalable processes through automation and process improvement to help strengthen the finance foundation

- Perform ad-hoc analysis on critical business needs
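
The commission-tracking work above can be sketched as a small calculation. The plan terms here (base rate, quota, accelerator) are invented for illustration; real plans would follow the approved compensation documents the posting mentions.

```python
# Hypothetical sketch of monthly commission math; rate, quota, and
# accelerator values are assumptions, not H1's actual plan terms.
def monthly_commission(bookings, quota, base_rate=0.08, accelerator=1.5):
    """Pay base_rate on bookings up to quota, an accelerated rate above it."""
    attained = min(bookings, quota)
    excess = max(bookings - quota, 0)
    return attained * base_rate + excess * base_rate * accelerator

# A rep with a $100k quota who books $130k in a month:
payout = monthly_commission(130_000, 100_000)
print(round(payout, 2))  # 11600.0
```

The $130k example pays 8% on the first $100k ($8,000) plus 12% on the $30k above quota ($3,600).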


ABOUT YOU

You’re a strong, data-driven financial analyst with experience in FP&A or strategic finance for high-growth enterprise B2B SaaS, healthcare, or marketplace companies. You know how to thrive in a fast-paced and frequently changing environment.


REQUIREMENTS

- 3+ years of experience in a Finance department

- Bachelor’s degree in Finance, Accounting, or a related field (MBA is a plus)

- Experience in B2B SaaS financial modeling is a plus

- Advanced skills in Microsoft Excel and PowerPoint (Google Sheets and Slides experience is a plus)

- Excellent communication skills with the ability to interact directly with people at all levels of the organization

- Ability to meet deadlines while working in a fast-paced environment

- Advanced system skills and the ability to learn new systems quickly.

- Strong attention to detail and ability to effectively prioritize tasks



COMPENSATION

This role pays $75,000 to $88,000 per year, based on experience, in addition to stock options.


Anticipated role close date: 01/10/2026



H1 OFFERS

- Full suite of health insurance options, in addition to generous paid time off

- Pre-planned company-wide wellness holidays

- Retirement options

- Health & charitable donation stipends

- Impactful Business Resource Groups

- Flexible work hours & the opportunity to work from anywhere

- The opportunity to work with leading biotech and life sciences companies in an innovative industry with a mission to improve healthcare around the globe



H1 is proud to be an equal opportunity employer that celebrates diversity and is committed to creating an inclusive workplace with equal opportunity for all applicants and teammates. Our goal is to recruit the most talented people from a diverse candidate pool regardless of race, color, ancestry, national origin, religion, disability, sex (including pregnancy), age, gender, gender identity, sexual orientation, marital status, veteran status, or any other characteristic protected by law.

 

H1 is committed to working with and providing access and reasonable accommodation to applicants with mental and/or physical disabilities. If you require an accommodation, please reach out to your recruiter once you've begun the interview process. All requests for accommodations are treated discreetly and confidentially, as practical and permitted by law.



Please mention the word **DISTINCTION** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
GTM Analytics Engineer
  • Stedi
  • Remote
saas founder architect recruiter

We're building a new healthcare clearinghouse

In the healthcare sector, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) requires that all insurance payers exchange transactions such as claims, eligibility checks, prior authorizations, and remittances using a standardized EDI format called X12 HIPAA. A small group of legacy clearinghouses process the majority of these transactions, offering consolidated connectivity to carriers and providers.
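
For readers unfamiliar with the format, X12 is a delimited text format: transactions are strings of segments, each a list of elements. A minimal sketch of splitting a payload into segments and elements, assuming the common default separators (`~` segment terminator, `*` element separator; real parsers read these from the ISA header):

```python
# Minimal illustrative X12 tokenizer; not a validating HIPAA parser.
def parse_x12(payload, seg_term="~", elem_sep="*"):
    segments = [s.strip() for s in payload.split(seg_term) if s.strip()]
    return [seg.split(elem_sep) for seg in segments]

# A fragment of a 270 eligibility inquiry (segment contents abbreviated):
sample = "ST*270*0001~BHT*0022*13~SE*2*0001~"
for seg in parse_x12(sample):
    print(seg[0], seg[1:])
```

Each segment's first element is its identifier (`ST`, `BHT`, `SE`); the rest are positional data elements defined by the implementation guides.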

Stedi is the world's only programmable healthcare clearinghouse. By offering modern API interfaces alongside traditional real-time and batch EDI processes, we enable both healthcare technology businesses and established players to exchange mission-critical transactions. Our clearinghouse product and customer-first approach have set us apart. Stedi was ranked as Ramp’s #3 fastest-growing SaaS vendor.

Stedi has lightning in a bottle: engineers and designers shipping products week in and week out; a lean business team supporting the company’s infrastructure; passion for automation and eliminating toil; $92 million in funding from top investors like Stripe, Addition, USV, Bloomberg Beta, First Round Capital, and more. To learn more about how we work, watch our founder Zack’s interview with First Round Capital.

What we’re looking for

We’re hiring a full-stack data and analytics engineer to build and own the data foundation that will power our daily GTM operations: revenue analytics, product usage telemetry, CRM data quality, attribution, funnel performance, and forecasting.

This is not a typical business analyst position. You will architect the pipelines, models, and automations that ensure our GTM teams have reliable, real-time insights into how customers discover, adopt, and expand with Stedi and our products. You will work closely with Sales, GTM Ops, Product, and Finance, executing data and analytics engineering workstreams, and conducting hands-on analysis to build the source-of-truth data for our GTM operations.

What you'll do

  • Build and maintain GTM data pipelines: Own ingestion, transformation, and syncing of CRM data (HubSpot), product-usage telemetry, billing data, and third-party enrichment data in Redshift to support GTM analytics workstreams.

  • Develop core GTM & revenue data models: Improve operational efficiency through standardization of datasets for Sales, GTM Ops, Finance, and the executive team, while establishing common metric definitions across revenue, customer segments, and more.

  • Ship dashboards, alerts, and decision-making tools: Improve telemetry into business performance by building dashboards to track things like sales funnel performance and pipeline quality. Better inform GTM leadership through automation of weekly/monthly reporting and establishing a revenue forecast.

  • Investigate trends and build models to support sales. Accelerate sales effectiveness through implementation of alerting for critical events (e.g. pipeline drops, usage contractions, stuck deals, missed lifecycle transitions), conducting key analyses (e.g. pipeline velocity, win rates, segmentation performance), and development of GTM models (e.g. ICP scoring, account prioritization, churn risk).

  • Own the GTM analytics roadmap: Work with GTM leadership to maintain a backlog of GTM analytics engineering work. Proactively identify the next set of capabilities the GTM org needs (forecasting, routing logic, new usage signals, etc).
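
The pipeline-drop alerting idea above can be sketched in a few lines: compare each segment's open pipeline week over week and flag drops beyond a threshold. The segments, figures, and 20% threshold are invented for illustration.

```python
# Hedged sketch of week-over-week pipeline-drop alerting; all data invented.
def pipeline_alerts(prev, curr, drop_threshold=0.2):
    """Return (segment, prev, curr) for segments that dropped past the threshold."""
    alerts = []
    for segment, prev_value in prev.items():
        curr_value = curr.get(segment, 0)
        if prev_value > 0 and (prev_value - curr_value) / prev_value > drop_threshold:
            alerts.append((segment, prev_value, curr_value))
    return alerts

last_week = {"enterprise": 500_000, "mid_market": 200_000}
this_week = {"enterprise": 480_000, "mid_market": 120_000}
print(pipeline_alerts(last_week, this_week))  # [('mid_market', 200000, 120000)]
```

In practice the inputs would come from modeled CRM data in Redshift and the output would feed a dashboard or a Slack alert rather than a print statement.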

Who you are

  • You have exceptional analytical skills: You’ve made a career in working with data to improve products and overall business operations. You know the tools, best practices, and playbooks necessary to stand up a high-performing and organized analytics function at the company.

  • You know the tech stack: You write efficient SQL queries to analyze large datasets and can work with complex schemas. You're an expert with data visualization tools like Tableau, QuickSight, or Power BI, and you're familiar with cloud environments (AWS, Azure, GCP).

  • You create and execute your own work: You notice patterns others miss and dig deep to understand root causes. You've identified data issues or operational inefficiencies that led to meaningful improvements.

  • You do what it takes to get the job done: You are resourceful, self-motivating, self-disciplined, and don’t wait to be told what to do. You put in the hours.

  • You move quickly: We move quickly as an organization, and this requires matching our pace: responding with urgency (both externally to payers and internally to stakeholders), communicating what you are working on, and proactively asking for help or feedback when you need it.

  • You are a “bottom feeder”: You thrive on the details. No task is too small in order to find success, generate revenue, and improve our costs.

The annual compensation range for this role is $180,000-$230,000. For roles with a variable component, the range provided is the role’s On Target Earnings ("OTE") range, which means that the range is inclusive of the sales commissions or bonus target and annual base salary. This range may be inclusive of multiple experience levels at Stedi and will be narrowed during the interview process based on a number of factors, including the candidate’s experience, location, and qualifications. Please reach out to your recruiter with any questions.

We’ve been made aware of individuals impersonating the Stedi recruiting team. Please note:

  • All official communication about roles at Stedi will only come from an @stedi.com email address.

  • If you’re unsure whether a message is legitimate or have any concerns, feel free to contact us directly at careers@stedi.com.

We appreciate your attention to this and your interest in joining Stedi.

At Stedi, we're looking for people who are deeply curious and aligned to our ways of working. You're encouraged to apply even if your experience doesn't perfectly match the job description.



Please mention the word **LOGICAL** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Data Engineer
  • ChowNow
  • Remote
support mobile senior sales
ABOUT US: ChowNow is one of the leading players in off-premise restaurant technology. As takeout becomes a vital revenue stream for independent restaurants, our platform helps owners focus on what they do best—serving great food—by offering solutions across the entire digital dining experience. From building branded websites and mobile apps, to powering online orders, managing menus, consolidating delivery, and running targeted marketing, we give restaurants the tools to grow on their own terms.

We support over 20,000 restaurants across North America, helping process $1B+ in gross food sales while saving our partners over $700M in third-party commission fees. Through our white-label ordering solutions, a growing demand network (including Google, Yelp, Apple, and Snap), and a diner-friendly marketplace, we empower independent restaurants to own their customer relationships and avoid the inflated pricing and fees charged by third-party delivery apps like Uber and DoorDash. Founded in 2012.

Please mention the word **SWANKY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Machine Learning Engineer
  • Fetch
  • United States
software mobile senior engineer

What we're building and why we're building it. 

Every month, millions of people use Fetch, earning rewards for buying brands they love and a whole lot more. Whether shopping in the grocery aisle, grabbing a bite at the drive-through or playing a favorite mobile game, Fetch empowers consumers to live rewarded throughout their day. To date, we've delivered more than $1 billion in rewards and earned more than 5 million five-star reviews from happy users. 

It's not just our users who believe in Fetch: with investments from SoftBank, Univision, and Hamilton Lane, and partnerships ranging from challenger brands to Fortune 500 companies, Fetch is reshaping how brands and consumers connect in the marketplace. When you work at Fetch, you play a vital role in a platform that drives brand loyalty and creates lifelong consumers with the power of Fetch points. User and partner success are at the heart of everything we do, and we extend that same commitment to our employees.

At Fetch, we value curiosity, adaptability, and the confidence to explore new tools, especially AI, to drive smarter, faster work. You don't need to be an expert, but you should be ready to learn quickly and think critically. We welcome learners who move fast, challenge the status quo, and shape what's next, with us.  Ranked as one of America's Best Startup Employers by Forbes for two years in a row, Fetch fosters a people-first culture rooted in trust, accountability, and innovation. We encourage our employees to challenge ideas, think bigger, and always bring the fun to Fetch.

Fetch is an equal employment opportunity employer.

About the Role:

We are seeking a Machine Learning Software Engineer to join Fetch's Scan, Match & Catalog team. This role sits at the intersection of applied machine learning, data engineering, and production systems, with a focus on improving receipt understanding, product matching, and catalog enrichment at scale. You w

Please mention the word **FASHIONABLY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

$$$ Full time
Senior Data Engineer
  • Exadel
  • Brazil, Bulgaria, Colombia, Georgia, Lithuania, Poland, Romania
jira salesforce code web

Why Join Exadel 

We’re an AI-first global tech company with 25+ years of engineering leadership, 2,000+ team members, and 500+ active projects powering Fortune 500 clients, including HBO, Microsoft, Google, and Starbucks.

From AI platforms to digital transformation, we partner with enterprise leaders to build what’s next.
What powers it all? Our people: ambitious, collaborative, and constantly evolving.

About the Client  

A U.S.-based education services provider offering online and campus-based post-secondary education, primarily serving military personnel, veterans, and public service communities. The organization delivers degree and certificate programs across disciplines such as nursing, health sciences, business, IT, and liberal arts. In addition to its headquarters in West Virginia, the customer operates facilities and partner institutions across the United States. The primary product areas to work with are learning management systems, student enrollment, and academic operations on web and mobile platforms.

What You’ll Do  

  • Design, implement, and maintain scalable data pipelines using Snowflake, Coalesce.io, Airbyte, and SQL Server/SSIS, with some use of Azure Data Factory
  • Build and maintain dimensional data models to ensure high-quality, structured data for analytics and reporting
  • Implement Medallion architecture in Snowflake, managing bronze, silver, and gold layers
  • Collaborate with teams using Jira for task tracking and GitHub for code repository management
  • Ensure reliable ETL processes, data transformations, and data integration workflows
  • Help improve data modeling practices and address weaknesses in dimensional modeling
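
The Medallion flow named above (bronze, silver, gold) can be illustrated with a toy example. In practice these layers would be Snowflake tables managed through Coalesce transformations, not Python lists; the student-records data here is invented to match the client's domain.

```python
# Toy bronze -> silver -> gold sketch of a Medallion-style refinement.
bronze = [  # raw ingested rows, duplicates and bad records included
    {"student_id": "S1", "course": "NUR101", "grade": "A"},
    {"student_id": "S1", "course": "NUR101", "grade": "A"},   # duplicate
    {"student_id": "S2", "course": "NUR101", "grade": None},  # incomplete
    {"student_id": "S3", "course": "BUS200", "grade": "B"},
]

# Silver: deduplicated, validated records.
seen, silver = set(), []
for row in bronze:
    key = (row["student_id"], row["course"])
    if row["grade"] is not None and key not in seen:
        seen.add(key)
        silver.append(row)

# Gold: aggregated, analytics-ready facts (graded enrollments per course).
gold = {}
for row in silver:
    gold[row["course"]] = gold.get(row["course"], 0) + 1

print(gold)  # {'NUR101': 1, 'BUS200': 1}
```

The point of the pattern is that each layer has a contract: bronze preserves raw inputs, silver enforces quality rules, and gold serves reporting.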

What You Bring  

  • Hands-on experience with Snowflake, Coalesce.io, Airbyte, SQL Server/SSIS, and Azure Data Factory
  • Strong understanding of Medallion architecture and dimensional data modeling
  • Practical experience in building ETL pipelines and transforming data for analytics
  • Familiarity with Jira and GitHub for collaborative work
  • Strong analytical and problem-solving skills, with ability to collaborate across teams
  • Minimum 4-hour overlap with US Eastern Time

Nice to Have

  • Exposure to Power BI (optional)
  • Experience with Salesforce data integration
  • Background in higher education / ed-tech domains

English level 

Intermediate/Upper-Intermediate

Legal & Hiring Information 

  • Exadel is proud to be an Equal Opportunity Em

    Please mention the word **EXALTATION** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Analytics Engineer
  • Alpaca
  • Remote - North America
crypto technical support financial

Who We Are:

Alpaca is a US-headquartered self-clearing broker-dealer and brokerage infrastructure for stocks, ETFs, options, crypto, fixed income, 24/5 trading, and more. Our recent Series C funding round brought our total investment to over $170 million, fueling our ambitious vision.

Amongst our subsidiaries, Alpaca is a licensed financial services company, serving hundreds of financial institutions across 40 countries with our institutional-grade APIs. This includes broker-dealers, investment advisors, wealth managers, hedge funds, and crypto exchanges, totalling over 6 million brokerage accounts.

Our global team is a diverse group of experienced engineers, traders, and brokerage professionals who are working to achieve our mission of opening financial services to everyone on the planet. We're deeply committed to open-source contributions and fostering a vibrant community, continuously enhancing our award-winning, developer-friendly API and the robust infrastructure behind it.

Alpaca is proudly backed by top-tier global investors, including Portage Ventures, Spark Capital, Tribe Capital, Social Leverage, Horizons Ventures, Unbound, SBI Group, Derayah Financial, Elefund, and Y Combinator.

 

Our Team Members:

We're a dynamic team of 230+ globally distributed members who thrive working from our favorite places around the world, with teammates spanning the USA, Canada, Japan, Hungary, Nigeria, Brazil, the UK, and beyond!

We're searching for passionate individuals eager to contribute to Alpaca's rapid growth. If you align with our core values—Stay Curious, Have Empathy, and Be Accountable—and are ready to make a significant impact, we encourage you to apply.

About the Role:

We are seeking an Analytics Engineer to own and execute the vision for our data transformation layer. You will be at the heart of our data platform, which processes hundreds of millions of events daily from a wide array of sources, including transactional databases, API logs, CRMs, payment systems, and marketing platforms.

You will join our 100% remote team and work closely with Data Engineers (who manage data ingestion) and Data Scientists and Business Users (who consume your data models). Your primary responsibility will be to use dbt and Trino on our GCP-based, open-source data infrastructure to build robust, scalable data models. These models are critical for stakeholders across the company—from finance and operations to the executive team—and are delivered via BI tools, reports, and reverse ETL systems.

What You'll Do:

  • Own the Transformation Layer: Design, build, and maintain scalable data models using dbt and SQL to support diverse business needs, from monthly financial reporting to near-real-time operational metrics.
  • Set Technical Standards: Establish and enforce best practices for data modelling, development, testing, and monitoring to ensure data quality and reliability.


Please mention the word **AGILE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Software Engineer Trading Infrastructure
  • Gauntlet
  • New York City / San Francisco / Los Angeles / Remote
software design web3 defi

Gauntlet leads the field in quantitative research and optimization of DeFi economics. We manage market risk, optimize growth, and ensure economic safety for protocols facilitating most spot trading, borrowing, and lending activity across all of DeFi, protecting and optimizing the largest protocols and networks in the industry. We build institutional-grade vaults for decentralized finance, delivering risk-adjusted onchain yields for capital at scale. Designed by the most vigilant, quantitative minds in crypto and informed by years of research.


As of November 2025, Gauntlet manages over $2B in vault TVL, and optimizes risk and incentives covering over $42 billion in customer TVL. We continually publish cutting-edge research that informs our risk models, alerts, and analysis, and is among the most cited institutions — including academic institutions — in terms of peer-reviewed papers addressing DeFi as a subject. We’re a Series B company with around 75 employees, operating remote-first with a home base in New York City.


As a company, we build institutional-grade vaults that deliver risk-adjusted DeFi yields at scale, powered by automated risk models and off-chain intelligence. Gauntlet curates strategies across Morpho, Drift, Symbiotic, Aera and more, with >$2B in vault TVL and a growing suite of Prime, Core and Frontier vaults.


Our mission is to drive adoption and understanding of the financial systems of the future. We operate with a trader’s discipline and a risk manager’s skepticism: size carefully, stress routinely, unwind decisively. The label equals the package equals the contents. No surprises, just predictable, reliable vaults.


Join our derivatives trading team and work on the key infrastructure that powers our product offering as well as trading systems. Work with a team with decades of experience in tech and finance to build the backbone of our high-performance derivatives trading strategies. You'll work close to trading, own critical infrastructure end-to-end, and ship systems that manage real capital in live crypto markets.



Responsibilities
  • Design, implement, and operate scalable distributed systems in production.
  • Build low-latency and streaming systems for real-time and near real-time workloads.
  • Develop data pipelines and ETL workflows for ingesting, transforming, and serving data.
  • Build and maintain application services and APIs used by internal and external systems.
  • Implement Web3 protocol integrations, including smart contract interactions and on-chain data ingestion via RPCs, logs, and indexers.
  • Apply SRE principles to improve reliability, observability, and operational correctness.
  • Participate in incident response, debugging production issues and driving root-cause fixes.
  • Contribute to system design and code reviews, maintaining high engineering standards.
  • Leverage AI-assisted development tools to improve productivity, code quality, and system understanding, while exercising strong engineering judgment.
  • Write and maintain technical documentation for systems and workflows.


Qualifications
  • 6+ years of professional software engineering experience.
  • Strong proficiency in Python, Rust, and/or JavaScript/TypeScript.
  • Experience building low-latency or high-throughput systems.
  • Experience designing and operating scalable distributed systems.
  • Hands-on experience with Web3 systems, including interacting with smart contracts and consuming on-chain data.
  • Experience with streaming or messaging systems (e.g. Kafka, Pub/Sub).
  • Experience with data storage systems (e.g. Postgres, ClickHouse).
  • Experience deploying and operating software in cloud environments (e.g. GCP).
  • Familiarity with containerized systems (Docker, Kubernetes).
  • Understanding of SRE practices, including monitoring, alerting, and incident response.
  • Strong understanding of security fundamentals (authentication, authorization, secrets management).


Bonus Points
  • Previous experience at financial or trading firms.
  • Smart contract development experience (e.g. Solidity).
  • Experience with workflow orchestration (e.g. Dagster).
  • Experience operating systems with strict reliability or performance requirements.
  • Exposure to infrastructure as code or CI/CD systems.


Benefits and Perks
  • Remote first - work from anywhere in the US & CAN!
  • Competitive packages with the added opportunity for incentive-based compensation
  • Regular in-person company retreats and cross-country "office visit" perk
  • 100% paid medical, dental and vision premiums for employees
  • Laptop provided
  • $1,000 WFH stipend upon joining
  • $100 per month reimbursement for fitness-related expenses
  • Monthly reimbursement for home internet, phone, and cellular data
  • Unlimited vacation policy
  • 100% paid parental leave of 12 weeks
  • Fertility benefits


$185,000 - $225,000 a year

Please note at this time our hiring is reserved for potential employees who are able to work within the contiguous United States and Canada. Should you need alternative accommodations, please note that in your application.


The national pay range for this role is $165,000 - $205,000 plus additional On Target Earnings potential by level and equity in the company. Our salary ranges are based on paying competitively for a company of our size and industry, and are one part of many compensation, benefits and other reward opportunities we provide. Individual pay rate decisions are based on a number of factors, including qualifications for the role, experience level, skill set, and balancing internal equity relative to peers at the company.  


#LI-Remote



Please mention the word **CONSUMMATE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Principal Data Operations & Migration Lead
  • StarCompliance
  • York, United Kingdom
technical support software financial

About StarCompliance

StarCompliance is on a mission to make compliance simple and easy. Trusted globally by enterprise financial institutions, the user-friendly STAR platform empowers organizations to achieve regulatory compliance while safeguarding their integrity and business reputations. Through a customizable, 360-degree view of employee activity, the STAR software enables firms to automate the detection and resolution of potential areas of conflict while streamlining daily workflows and increasing efficiency. 


Role  

StarCompliance is looking for a senior, hands-on Data Operations & Migration Specialist to oversee our data feed operations and client data migration capabilities. This role combines technical leadership with day-to-day delivery, acting as a player coach who sets direction, unblocks issues, and still gets hands-on when it matters.


You will own the operational health of broker and client data feeds, lead complex data migration initiatives during client onboarding, and provide mentorship and technical guidance to engineers and analysts across both functions. Deep domain knowledge in financial services data, particularly regulated trading, transaction, or reference data, is critical. 


This role sits within the Enterprise Data function and works closely with R&D, Client Support Services, Professional Services, and Relationship Management to ensure client data is secure, accurate, compliant, and delivered on time. 



Responsibilities
  • Leadership Responsibilities 
  • Provide technical and operational leadership across Data Operations and Data Migration functions. 
  • Act as a player coach, balancing hands-on delivery with coaching, mentoring, and upskilling team members. 
  • Set standards for operational excellence, data quality, documentation, and incident management. 
  • Own prioritisation and workload planning across feeds and migrations, ensuring delivery commitments are met. 
  • Serve as the escalation point for complex data issues, client escalations, and high-risk migrations. 
  • Partner with Product, Engineering, and Professional Services to influence roadmap decisions and onboarding strategies.  
  • Act as a trusted technical partner for internal teams and external stakeholders during onboarding and operational change. 
  • Translate complex technical and data concepts into clear, actionable guidance for non-technical audiences. 
  • Contribute to client-facing discussions where deep data or feed expertise is required. 

  • Data Feed Operations Ownership 
  • Oversee the delivery, maintenance, and evolution of StarCompliance’s broker and client data feed infrastructure. 
  • Ensure secure setup and ongoing management of SFTP connectivity, access permissions, and encryption standards. 
  • Own operational monitoring of daily and intraday feeds, proactively identifying trends, risks, and failure patterns. 
  • Drive continuous improvement across feed automation, resilience, monitoring, and alerting. 
  • Work closely with the wider Enterprise Data engineering team on feed-related enhancements and defect resolution. 
  • Ensure platforms such as MoveIt and associated automation tooling are stable, well configured, and fit for scale. 

  • Data Migration Leadership 
  • Oversee the planning and execution of complex data migrations from third-party vendors into StarCompliance products. 
  • Define and review migration strategies, data mappings, validation approaches, and cutover plans. 
  • Ensure data integrity, accuracy, and regulatory compliance throughout the migration lifecycle. 
  • Provide hands-on support for data analysis, transformation, and validation where required. 
  • Oversee post-migration support, ensuring issues are resolved quickly and root causes addressed. 


Skills & Experience
  • Strong experience in financial services, fintech, regtech, or similarly regulated data environments.
  • Deep domain knowledge of financial broker feeds, file-based integrations, and operational data pipelines.
  • Hands-on experience with SQL Server, including T-SQL for investigation and data validation.
  • Strong understanding of ETL processes and tooling.
  • Experience with secure file transfer technologies and encryption standards, including SFTP, PGP/GPG, and SSH.
  • Proficiency in scripting and automation using tools such as PowerShell, Python, and SQL.
  • Proven experience leading data operations or data migration initiatives in production environments.
  • Ability to balance strategic thinking with hands-on delivery.
  • Excellent problem-solving skills and calm decision-making under pressure. 
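To make the scripting-and-validation side of the role concrete, here is a minimal Python sketch of the kind of integrity check an operations engineer might run on a delivered broker feed file before loading it. The column names, thresholds, and feed layout are hypothetical illustrations, not StarCompliance's actual feed specification.

```python
import csv
import hashlib
import io

# Hypothetical required columns for an illustrative trade feed;
# a real feed spec would come from the broker integration contract.
REQUIRED_COLUMNS = {"trade_id", "account", "symbol", "quantity", "trade_date"}

def validate_feed(raw_bytes: bytes, min_rows: int = 1) -> dict:
    """Run basic integrity checks on a delivered feed file."""
    checksum = hashlib.sha256(raw_bytes).hexdigest()  # kept for the audit trail
    reader = csv.DictReader(io.StringIO(raw_bytes.decode("utf-8")))
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    rows = list(reader)
    issues = []
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    if len(rows) < min_rows:
        issues.append(f"row count {len(rows)} below minimum {min_rows}")
    # Flag rows with a blank key field rather than silently loading them.
    bad_rows = [i for i, r in enumerate(rows, start=2) if not r.get("trade_id")]
    if bad_rows:
        issues.append(f"blank trade_id on lines {bad_rows}")
    return {"sha256": checksum, "rows": len(rows), "issues": issues}
```

In practice a check like this would run automatically on arrival over SFTP, with failures raised into the incident-management process rather than being fixed by hand.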


Minimum Qualifications
  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or equivalent professional experience.  
  • Proven leader with 5+ years in data operations, data engineering, data migration, or related technical roles, ideally within financial services or compliance technology. 


How We Think About AI
  • At StarCompliance, AI is not a side experiment or a specialist niche. We treat it as a practical capability that strengthens how we operate, scale, and deliver secure, high quality data services. 

  • In Enterprise Data, we expect senior leaders to: 
  • Use AI assisted tools to improve operational efficiency. 
  • Stay informed about how AI can enhance data operations, migration strategy, and automation in regulated environments. 
  • Apply AI thoughtfully, with strong awareness of data security, client confidentiality, regulatory risk, and cost. 
  • Help the team adopt AI responsibly in day-to-day operations, without compromising control, traceability, or compliance standards. 



StarCompliance Background Checks


All positions require pre-employment screening because employees may have access to highly sensitive and confidential financial and compliance information; candidates must be trustworthy and have a heightened sensitivity to protecting confidential financial and professional information. To be eligible for employment with StarCompliance, candidates must undergo a rigorous background investigation with checks including, but not limited to, criminal record history, consumer credit, employment history, qualifications, and education.



Equal Opportunity Employer Statement


We prohibit discrimination and harassment of any kind based on race, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, gender identity or expression, marital/civil union/domestic partnership status, veteran status or any other protected characteristic as outlined by country, state, or local laws.


This policy applies to all employment practices within our organisation, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. StarCompliance makes hiring decisions based solely on qualifications, merit, and business needs at the time. For more information, please request a copy of our Equal Opportunities Policy.




Please mention the word **CAPTIVATING** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
CFO
  • Marathon Talent
  • Remote
cfo support software accounting

Offshore CFO (Multifamily Real Estate) — Job Description

Overview

We are hiring a CFO to lead the finance and accounting function for a U.S.-based multifamily owner/operator. This role owns financial statements, monthly close, cash management, budgeting/forecasting, reporting, and controls across multiple properties and entities. The right candidate is tech-forward and excited to modernize finance through automation, AI, and API-driven integrations.

Key Responsibilities

• Monthly close & financial statements: Own timely, accurate close and delivery of P&L, balance sheet, and cash flow with supporting schedules.

• Reconciliations & controls: Ensure complete bank/GL reconciliations, AR/AP tie-outs, accruals, prepaids, CIP/fixed assets, intercompany, and documented processes.

• Management reporting: Produce property/portfolio reporting including budget vs. actual, variance explanations, and key operating KPIs.

• Cash management: Maintain daily cash visibility and a rolling 13-week cash forecast; manage payment cadence, approvals, reserves, and liquidity planning.

• Budgeting & forecasting: Lead annual budgets and reforecasts (revenue, payroll, utilities, repairs, insurance, taxes, CapEx).

• CapEx / renovation tracking: Track project budgets, spend, and ROI support (CIP and unit-level economics as applicable).

• Lender / compliance support: Manage covenant reporting, lender deliverables, and coordination with CPAs/tax/audit teams.

• Section 8 / Housing Authority & municipal compliance: Support affordable housing reporting and compliance (as applicable), including coordination with Housing Authorities/cities, audits, and required documentation.

• Team leadership: Lead and develop offshore accounting staff (AP/AR/accountants); set SOPs, close calendar, and review standards.

• Tech/automation leadership: Implement and optimize workflows using AI tools, automation, and API connections across property management, accounting, reporting, and data pipelines.
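As a rough illustration of the rolling 13-week cash forecast this role maintains, here is a minimal Python sketch that projects week-ending balances from opening cash and weekly flows. The figures and structure are hypothetical, not a prescribed template.

```python
from datetime import date, timedelta

def thirteen_week_forecast(opening_cash: float,
                           weekly_inflows: list[float],
                           weekly_outflows: list[float],
                           start: date) -> list[dict]:
    """Project week-ending cash balances over a 13-week horizon."""
    assert len(weekly_inflows) == len(weekly_outflows) == 13
    balance = opening_cash
    rows = []
    for week in range(13):
        net = weekly_inflows[week] - weekly_outflows[week]
        balance += net
        rows.append({
            "week_ending": start + timedelta(weeks=week + 1),
            "net_flow": round(net, 2),
            "closing_cash": round(balance, 2),
        })
    return rows
```

The "rolling" part is operational rather than computational: each week the oldest column drops off and a new week 13 is appended, so the forecast always looks 13 weeks out.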

Requirements (Must-Have)

• 8+ years of experience as a CFO (or senior finance leader) in real estate; multifamily strongly preferred.

• Expert in financial statements, close management, reconciliations, cash forecasting, and internal controls.

• Strong ability to deliver decision-ready reporting (budget vs. actual, variance analysis, KPIs).

• Bilingual proficiency: fluent professional English and Spanish (written and spoken).

• Property management software experience; ResMan preferred.

• Expense management software experience with Brex or Ramp; Brex preferred.

• Experience working with Section 8 programs, Housing Authorities, and municipal/city requirements (as applicable), including compliance reporting and audit support.

• Strong understanding of real estate legal entities and structures (LLCs/LPs/SPVs), intercompany accounting, and entity-level reporting.

• Tech-forward mindset: comfortable implementing automation/AI and working with APIs/integrations (no coding required, but must be fluent with modern tools).

• Advanced Excel/Google Sheets skills; comfortable building standardized reporting templates and dashboards.

• Ability to work offshore with consistent overlap with U.S. business hours and days (ET/CT preferred).

Preferred

• Multi-entity consolidation, lender compliance/covenants, and renovation-heavy portfolios.

• Experience with BI/reporting tools (Power BI/Tableau) and modern AP/bill pay tools.

Working Model

• Remote / Offshore (LATAM preferred for timezone overlap)

• Reports to Ownership/CEO/Managing Partner; partners closely with Operations and Asset Management



Please mention the word **COMPLIANT** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Director Data Engineering
  • Revinate
  • Atlanta, GA
director design hr security

Revinate is one of the largest and most innovative providers of direct revenue-generating solutions in the hospitality industry. Revinate's mission is to deliver hoteliers scalable direct revenue and profits from data-driven solutions that cultivate deeper relationships with guests. Revinate’s Direct Booking Platform helps capture, convert and retain guests with strategies and services that maximize direct booking revenue. This combination maximizes the lifetime value of each guest through personalized and targeted campaigns across the guest journey. Revinate Marketing has won 1st place for Hotel CRM & Email Marketing in the HotelTechAwards five years in a row!


About Us


Revinate is an innovative hospitality tech company that is revolutionizing how customers manage their operations and enhance the guest experience. Our solutions leverage advanced technology, data analytics, and automation to improve efficiency and drive customer happiness in the hospitality industry.  


The Opportunity


We are seeking an experienced and visionary Director, Data Engineering to lead our Data Platform initiatives. In this critical role, you will be responsible for defining the strategy, architecture, and execution of our end-to-end data ecosystem, encompassing data ingestion pipeline, operational data stores, our evolving data lakehouse, and robust data APIs. You will build and lead a high-performing team of data engineers, fostering a culture of innovation, collaboration, and operational excellence. This role requires not only deep technical expertise but also a strong understanding of how data can drive business value, including leveraging data science and machine learning to optimize our operations.


Key Responsibilities


Strategic Leadership: Define and execute the long-term vision and roadmap for our data platform, aligning with overall business objectives and technology strategy.


Team Leadership & Development: Recruit, mentor, and lead a talented team of data engineers, fostering their growth and ensuring best practices in data engineering.


Data Pipeline: Oversee the design, development, and maintenance of scalable and reliable real-time data ingestion pipelines, ensuring data quality, accuracy, and timely delivery.


Operational Data Stores: Lead the architecture and management of our operational data stores, optimizing for performance, reliability, and accessibility to support critical business applications.


Data Lakehouse Development: Drive the strategic evolution and implementation of our data lakehouse, enabling unified data access, advanced analytics, and machine learning initiatives.


Data API Development: Champion the design and development of secure, performant, and well-documented data APIs to facilitate data consumption across various applications and user groups.


Data Governance & Quality: Enforce data governance policies, standards, and procedures to ensure data integrity, security, privacy, and compliance.


Operational Efficiency through Data Science/ML: Collaborate closely with data science and analytics teams to identify opportunities where data science and machine learning can be applied to optimize internal operations, automate processes, and improve efficiency within the data platform itself (e.g., predictive maintenance for pipelines, intelligent resource allocation).


Performance & Scalability: Ensure the data platform is highly performant, scalable, and resilient, capable of handling growing data volumes and complex analytical workloads.


Technology Evaluation: Evaluate and recommend new data technologies, tools, and platforms to enhance our data capabilities and stay ahead of industry trends.


Cross-Functional Collaboration: Partner effectively with engineering, product, analytics, data science, and business teams to understand data requirements and deliver impactful solutions.


Monitoring & Support: Establish robust monitoring, alerting, and on-call support processes for all data systems, ensuring high availability and rapid issue resolution.



What You’ll Bring
  • 10+ years of experience in data engineering roles, with at least 5 years in a leadership or management position overseeing data engineering teams.
  • Proven track record of building and scaling complex data platforms from the ground up, or significantly evolving existing ones.

Deep expertise in designing, building, and operating:
  • Data Ingestion Pipelines: (e.g., Kafka, Flink, Spark Streaming, Airflow, equivalent cloud services like Kinesis, Pub/Sub, Dataflow)
  • Operational Data Stores: (e.g., Cassandra, ScyllaDB, DynamoDB, PostgreSQL, MySQL)
  • Data Warehousing/Lakehouse Technologies: (e.g., AWS, GCP, S3, Iceberg, Redshift, BigQuery)
  • Data APIs & Services: (e.g., RESTful APIs, GraphQL)

  • Strong proficiency in Java / Scala.
  • Extensive experience with cloud data platforms (AWS, GCP) and their respective data services.
  • Solid understanding of data modeling techniques (relational, dimensional, NoSQL).
  • Literacy in data science and machine learning concepts: familiarity with common ML algorithms and their applications.
  • Understanding of the MLOps lifecycle and data requirements for ML models; ability to identify and articulate how data science/ML can be used to improve data platform operations (e.g., anomaly detection in pipelines, resource optimization).
  • Experience with implementing data governance, data quality, and metadata management tools and practices.
  • Excellent communication, interpersonal, and presentation skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.
  • Strong analytical and problem-solving abilities, with a focus on delivering practical and scalable solutions.
  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related quantitative field.
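The role's mention of anomaly detection in pipelines can be pictured with a minimal sketch: flag a day's pipeline throughput that deviates sharply from recent history using a z-score. The metric, window, and threshold are illustrative assumptions, not Revinate's actual monitoring setup.

```python
import statistics

def throughput_anomaly(history: list[float], today: float,
                       z_threshold: float = 3.0) -> bool:
    """Return True if today's throughput is a statistical outlier
    relative to the recent history window."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # Flat history: any deviation at all is anomalous.
        return today != mean
    return abs(today - mean) / stdev > z_threshold
```

A check like this would typically feed an alerting system so a sudden drop (or spike) in ingested records pages an on-call engineer before downstream reports go stale.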


Benefits
  • Health insurance-employee premium paid 100% by Revinate
  • Dental insurance-employee and dependents’ premium paid 100% by Revinate
  • Vision insurance-employee and dependents’ premium paid 100% by Revinate
  • 401(k) with employer match
  • Short & Long Term Disability insurance
  • Life insurance
  • Paid Flex time off
  • Monthly work from home stipend
  • Telehealth access
  • Employee Assistance Program (EAP)


$190,000 - $200,000 a year
The compensation package for the Director, Data Engineering includes a base salary and a performance-based bonus.

This salary range may be inclusive of several career levels at Revinate and will be narrowed during the interview process based on a number of factors, including (but not limited to) the candidate’s experience, qualifications and location. 

Interview Process 

We're excited you're considering a career with Revinate! Our goal is to ensure this is the right opportunity for you, while also determining if you're the right fit for our team. The interview process for this role is designed to be a two-way street, where you'll get to know us just as we get to know you.


 - Recruiter Screen - 30 min

 - Technical Interview - 60 min

 - Cross Functional Interview - 30 min

 - Final Interview - 30 min 




Revinate values the flexibility of a remote workforce and the benefits of localized hiring. We focus on specific cities to foster local communities and enhance team cohesion, allowing employees to collaborate, attend local events, and build a strong sense of community and company culture.

Candidates must be located in the city listed in the job application. Thank you!


Revinate is not open to third party solicitation or resumes for our posted FTE positions. Resumes received from third party agencies that are unsolicited will be considered complimentary.



Important Security Alert

We have been made aware of fraudulent activities involving individuals impersonating our HR team and offering fake job opportunities. Please be vigilant and ensure your safety by verifying all job offers.


For Authentic Opportunities: Only refer to our official careers page on our company website. Your security is our priority. If you encounter any suspicious activity, please report it immediately. Stay safe and secure! You can confirm or inquire with any questions by reaching out to recruiting@revinate.com





AI and Hiring 

Please note that interviews at Revinate will be recorded using brighthire.ai as we continue to build more structure into our interview processes -- the best way to eliminate unconscious bias. We encourage our interviewers to focus on our candidates and the conversation rather than on taking notes; instead, we rely on brighthire.ai to do the note-taking for us. If you're uncomfortable with recording your interview, please let us know and we'll opt you out.


Excited?!  Want to learn more? Apply Now!

Our Core Values:

One Revinate - United & Strong, on a single mission together

Built on Trust - It’s the foundation of everything we do

Expect Amazing - We think, dream & deliver big

Customer Love -- When the customer wins, we win

Make it Simpler -- Apply it to everything we do

Hungerness -- Feel it, follow it, be relentless about our success

Grounded in Gratitude - We’re glad to be here & make the most of every day


Revinate Inc. provides Equal Employment Opportunity to all employees and applicants for employment without regard to race, color, religion, gender identity or expression, sex, sexual orientation, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state and local laws. Revinate complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities. 


Revinate is not open to third party solicitation or resumes for our posted FTE positions. Resumes received from third party agencies that are unsolicited will be considered complimentary.


If you are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to recruiting@revinate.com.


By submitting your application you acknowledge that you have read Revinate's Privacy Policy (https://www.revinate.com/privacy/)




Please mention the word **HONORABLE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Intern Software Development
  • Netomi
  • Remote - India
software design technical code

About the Company:

Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands like Delta Airlines, MetLife, MGM, United, and others to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher quality customer experiences.


Want to be part of the AI revolution and transform how the world’s largest global brands do business? Join us!


Job description


We are looking for a Software Development Intern to help us write, fix, execute, and version code for existing applications. If you're passionate about solving real, fundamental problems and eager to explore, learn, and work with technologies beyond your immediate scope, Netomi is the perfect place for you.



Job Responsibilities
  • Assist in the planning, design, and execution of SOA backend platforms, mostly REST-based web frameworks in Java (Spark, Spring, ORM)
  • Contribute to high-level and low-level design of highly scalable components
  • Work collaboratively in a multi-disciplinary team environment
  • Assist key technical advisors in defining the project roadmap


Requirements
  • Experience with a scripting language for automated builds/deployments; Java preferred
  • Pursuing a B.E./B.Tech in Computer Science from a tier I or II institute (2025 and 2026 graduates only)



Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.



Please mention the word **MERRY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Data Engineer
  • Lalamove
  • Kuala Lumpur
technical support java senior

At Lalamove, we believe in the power of community. Millions of drivers and customers use our technology every day to connect with one another and move things that matter. Delivery is what we do best, and we ensure it is always fast and simple. Since 2013, we have tackled the logistics industry head on to find the most innovative solutions for the world's delivery needs. We are full steam ahead to make Lalamove synonymous with delivery and on a mission to impact as many local communities as we can. We have massively scaled our efforts across Asia and now have our sights on taking our best-in-class technology to the rest of the world. And we are looking for talented professionals to join us in this journey!


As a Senior Data Engineer at Lalamove, you will join the global Data team as a key member of our expanding technology team in our new market. Given the importance of user privacy and our commitment to data-compliance laws, we need an additional engineer to support our operations in the expanding market while collaborating closely with our global engineering team.




What you'll do:
  • Provide production support and incident response for our data platform in the expanding market.
  • Support and troubleshoot technical issues, including the data pipelines running on top of the data platform.
  • Collaborate with a geographically-dispersed team of engineers to support compliance for the expanding market.
  • Support ad hoc requests related to expanding market data and operations.


What you'll need:
  • Legally permitted to work in Malaysia
  • 5+ years of relevant experience in data engineering
  • Experience in supporting Big Data operations
  • Proficiency in SQL
  • Hands-on experience with Linux systems and command-line operations
  • Experience in Java and Spring Boot framework
  • Good command of English, fluency in Mandarin is a plus



To all candidates- Lalamove respects your privacy and is committed to protecting your personal data.

This Notice will inform you how we will use your personal data, explain your privacy rights and the protection you have by the law when you apply to join us. Please take time to read and understand this Notice. Candidate Privacy Notice: https://www.lalamove.com/en-hk/candidate-privacy-notice



Please mention the word **DASHING** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$181000 - $213000 Full time
Senior Software Engineer Data
  • Freshpaint
  • Remote
software architect technical growth

About Freshpaint:

Customer data is the fuel that drives all modern businesses. From product analytics, to marketing, to support, to advertising, advanced data analysis in the warehouse, and even sales – customer data is the raw material for each function at a modern business.

For highly regulated businesses in healthcare, it's always been a challenge to harness that customer data and get it to the marketing and analytics tools that require it while following patient privacy laws... until now.

Something as simple as running ads to get more users is straightforward for an e-commerce or software company. But common web analytics and advertising tools collect sensitive user identifiers and healthcare information automatically, and those same tools are not HIPAA compliant.

We provide a layer of data governance to make current web analytics tools HIPAA-compliant. For analytics, our customers can continue getting the insights they need to improve the patient experience. For marketing, Freshpaint safeguards health information while helping our customers promote access to care through popular advertising platforms like Facebook, Google, and others.

In short, we help healthcare marketers promote access to care and safeguard patient privacy at the same time. This is an important, complex problem in a massive market (healthcare is 20% of the US GDP).

Our customers manage their customer data with:

  1. Privacy Platform. We help healthcare providers automate their website’s + app’s HIPAA compliance, and safeguard patient data. This is our core product today

  2. Future additional product lines! Our core product provides a platform that we're building marketing applications on top of.


We’re fully remote. If you strongly value in-person work, Freshpaint is likely not the best fit for you. Even though we don’t care where you’re located, we only hire within the US. Much of our team is concentrated in metro areas like SF or NYC. To balance out our remote-ness, we gather the team twice per year for offsites. We’re backed by leading investors including Y Combinator, Intel Capital, and angel investors like the Head of Data from Slack, the Head of Data at LinkedIn, and more.

Who we are:

Freshpaint was founded by web analytics veterans who realized how hard it was for highly regulated companies to collect and use customer data in a compliant way. We started as part of Y Combinator’s S19 cohort and have been focused on enabling healthcare companies to collect, safeguard, and activate patient data ever since.

In 2022 the government issued updated guidance around HIPAA, effectively making our software a requirement for healthcare companies. As a result, we're one of the fastest-growing software companies on earth right now.

Our team has deep analytics and growth experience, with all of us coming from high-growth companies like Heap, Pendo, Iterable, Quantum Metric, and Retool. If you value lots of freedom and ownership in your work, interfacing with customers, and working on a product with high customer impact, then Freshpaint is your home.

About the Role

At Freshpaint, we believe that strong Engineering teams are built of individuals who

  • Solve problems, not tickets – Jump into unfamiliar territory and learn what's needed to move the team forward

  • Think like owners – Focus on delivering measurable business impact rather than completing tasks

  • Elevate others – Actively mentor, unblock, and celebrate teammates, knowing the team's wins are your wins

We are looking for a Senior Software Engineer - Data to join one of our Product-oriented teams. As Freshpaint has grown, our Products have become more sophisticated and increasingly leverage multiple sources of data. We’re seeking a Software Engineer who has competencies in Data and Data Engineering to help us shape the next generation of Freshpaint Products. We believe there’s a big opportunity ahead, and this person will contribute to the team’s success by building new products and by influencing how we incorporate data into our Product offerings.

What You’ll Do

  • Use your expertise to build Software Products that rely on data

    • Deliver business outcomes by either directly owning, or guiding others to build reliable and scalable products

    • Mentor engineers and analysts on best practices for data quality, reliability, testing, monitoring, and documentation

    • Partner closely with analytics, product, and engineering teams to identify data requirements and translate them into robust, scalable solutions

  • Join customer calls (both internal teams and external users) to hear firsthand what problems they're solving and what features actually move the needle

  • Design and refine data models that underpin product functionality while implementing monitoring systems to ensure reliability and performance

  • Collaborate with our Data Guild to define the organization’s data strategy influencing decisions on tooling, architecture, and engineering standards

  • Solve problems side-by-side with team members through a combination of pairing and solo work
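One of the responsibilities above is designing data models with monitoring systems around them. As a purely illustrative sketch (dataset name, staleness window, and types are hypothetical, not Freshpaint's actual stack), a minimal freshness check might look like:

```python
# Illustrative freshness monitor: flag a dataset as stale when its newest
# event timestamp falls outside an allowed staleness window.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FreshnessResult:
    dataset: str
    latest: datetime
    is_fresh: bool

def check_freshness(dataset: str, event_times: list[datetime],
                    max_staleness: timedelta,
                    now: datetime) -> FreshnessResult:
    """Compare the newest event against the staleness budget."""
    latest = max(event_times)
    return FreshnessResult(dataset, latest, now - latest <= max_staleness)

# Example: newest event is 2h old, with a 3h staleness budget -> fresh.
now = datetime(2024, 1, 1, 12, 0)
events = [now - timedelta(hours=h) for h in (2, 5, 9)]
result = check_freshness("patient_events", events, timedelta(hours=3), now)
print(result.is_fresh)  # True
```

In practice a check like this would run on a schedule and feed an alerting system rather than print a value.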

If this sounds like you, we would love to chat!

What We’re Looking For

  • 5+ years of experience building Products, in Software Engineering, Data Engineering, or a closely related role

  • Strong customer orientation, with a focus on details that drive product impact and customer value

  • Proven experience building and maintaining production-grade data pipelines

  • Proficiency in application development

  • Proficiency in SQL and at least one data engineering language (e.g., Python, Scala, or Java)

  • Hands-on experience with large-scale, modern data warehouses (regardless of specific tooling) and data modeling best practices

  • Experience with data visualization and the ability to tell clear, compelling stories with data

  • Experience working with cloud-based data platforms (AWS, GCP, or Azure)

  • Familiarity with orchestration tools, version control, and CI/CD best practices

  • Ability to work independently, make sound architectural decisions, and thrive in ambiguous environments

  • Strong communication skills and comfort collaborating with both technical and non-technical partners

Nice to Have

  • Experience being an early data engineer at a company

  • Experience with Go, TypeScript, or dbt (Data Build Tool)

  • Experience with tools like Snowflake, Looker, or Fivetran

  • Experience with analytics engineering or BI tooling

  • Prior experience helping scale a data platform as the company grows

Why This Role Is Exciting

  • Build the foundation for what's next. You'll architect the data systems and strategy that power Freshpaint's future, shaping how the company scales for years to come

  • See your impact everywhere. Your work will touch every team and product at Freshpaint, giving you visibility into how engineering decisions drive real business outcomes

  • Code one day, strategize the next. You'll split your time between writing code and making architectural decisions that set technical direction. Perfect if you want to keep your hands on the keyboard while influencing the big picture

Interview Process

At the start of the call, we will briefly go through a few standard verification steps to ensure we’re speaking to the right person. This helps protect both candidates and our team against AI misuse. If at any point we get the sense we aren’t speaking with the right candidate, we reserve the right to end the call early.

  • Recruiter Screen

  • Hiring Manager Call

  • Virtual Onsite with Technical Pairings

  • CEO Interview

  • Offer!

Perks & Benefits

We take care of our team—here’s a peek at what you get when you join:

  • Competitive pay + generous equity (10-year exercise window)

  • Fully remote (U.S. only) with a $150/month coworking stipend

  • Half-day Fridays, every Friday

  • Unlimited PTO—with a required 2-week minimum

  • Top-tier health, dental & vision (100% covered for you, 80% for dependents)

  • 2 “Treat Yourself” days a year—$100 and a day off, just because

  • Generous parental leave

  • Epic offsites twice a year (past trips: Greece, Jackson Hole, Cabo, wine country + more)

And more—check out our careers page for the full list.



Please mention the word **SUCCESSFULLY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Software Engineer Data Platform
  • Zus Health
  • United States
software embedded system ceo

Who we are


Zus is a shared health data platform designed to accelerate healthcare data interoperability by providing easy-to-use patient data via API, embedded components, and direct EHR integrations. Founded in 2021 by Jonathan Bush, co-founder and former CEO of athenahealth, Zus partners with HIEs and other data networks to aggregate patient clinical history and then translates that history into user-friendly information at the point of care. Zus's mission is to catalyze healthcare's greatest inventors by maximizing the value of patient insights - so that they can build up, not around.


What we're looking for


We’re looking for an experienced Software Engineer to join the “Costco” team at Zus, which builds services for managing our rapidly growing bulk data offerings while adhering to complex healthcare access control requirements.


The ideal candidate will be excited to take on the challenge of processing, storing, and delivering the entire health records of millions of patients, adopting tools to handle growing scale, and ensuring high data quality and freshness. You are creative and innovative, and you love running experiments to explore paths for evolving our platform as we scale.


As part of the core Zus platform, the Costco team has needed to rapidly innovate to stay ahead of data volumes that grow 10x per year and a growing base of data-savvy customers using data to improve patient care. The team is also contending with an evolving regulatory landscape in data privacy and security.


On the Costco team, you will work with microservices in Go, streaming data pipelines in AWS, and state-of-the-art data technologies including Apache Iceberg, Apache Spark, Snowflake, and dbt. Expect to learn a lot and be put on mission-critical projects with direct customer impact.
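The access-control requirements the post mentions can be sketched in miniature. All names and fields below are hypothetical (not Zus's actual data model): a consent policy decides which FHIR-like patient records a requesting organization may read.

```python
# Hypothetical fine-grained access control over FHIR-like resources.
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    resource_type: str      # e.g. "Observation"
    patient_id: str
    source_org: str

def allowed(resource: Resource, requester_org: str,
            consents: dict[str, set[str]]) -> bool:
    """A record is visible if the requester produced it, or the patient
    has consented to sharing with that org."""
    if resource.source_org == requester_org:
        return True
    return requester_org in consents.get(resource.patient_id, set())

records = [
    Resource("Observation", "p1", "org-a"),
    Resource("Observation", "p2", "org-a"),
]
consents = {"p1": {"org-b"}}          # p1 consented to org-b; p2 did not
visible = [r for r in records if allowed(r, "org-b", consents)]
print(len(visible))  # 1
```

At bulk scale the same policy would typically be pushed down into the query engine rather than applied row by row in application code.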

\n


As part of our team, you will
  • Build and operate data services driving our applications and APIs
  • Collaborate with team members and across Engineering to iteratively prototype and develop new functionality
  • Partner with product managers and other Zusers


You're a good fit because you
  • Learn fast and enjoy open-ended technical challenges
  • Have experience with operationally stable, scalable, and cost efficient data services
  • Enjoy owning your work and seeing it deploy safely in production
  • Are experienced using cloud data warehouses such as Snowflake, BigQuery, Redshift, or Databricks
  • Have experience with at least one of the following: deployment technologies (GitHub Actions, CircleCI, etc.), cloud providers (AWS, Azure, GCP), and Infrastructure as Code (Terraform, CloudFormation, etc.)
  • Are excited to ~ finally! ~ enable a true digital revolution in healthcare
  • Thrive amid the changing landscape of a growing and evolving startup
  • Enjoy collaboration and solving unique problems


It would be awesome if you were
  • Experienced at working with petabyte-scale data
  • Experienced with Apache Iceberg, Apache Spark, and other large-scale data technologies
  • Experienced with AuthN/AuthZ and fine-grained access control
  • Familiar with multiple languages including either Go or Python
  • Experienced in working with healthcare data and APIs
  • Familiar with the FHIR and/or TEFCA standards


\n
$140,000 - $180,000 a year
We are a remote first company that believes that in-person interactions are beneficial. You should be comfortable traveling about once a quarter to collaborate with teammates face to face.
\n

We will offer you…


• Competitive compensation that reflects the value you bring to the team: a combination of cash and equity

• Robust benefits that include health insurance, wellness benefits, 401k with a match, unlimited PTO

• Opportunity to work alongside a passionate team that is determined to help change the world (and have fun doing it)


Please Note: Research shows that candidates from underrepresented backgrounds often don’t apply unless they meet 100% of the job criteria. While we have worked to consolidate the minimum qualifications for each role, we aren’t looking for someone who checks each box on a page; we’re looking for active learners and people who care about disrupting the current healthcare system with their unique experiences.


We do not conduct interviews by text nor will we send you a job offer unless you've interviewed with multiple people, including the Director of People & Talent, over video interviews. Job scams do exist so please be careful with your personal information.




Please mention the word **UNFETTERED** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Staff Software Engineer
  • Office Hours
  • Remote
software system consulting technical

About Us

Office Hours is an on-demand expert network that connects leading organizations with trusted experts across various knowledge domains. Experts earn income by sharing their knowledge through advisory work, projects, and AI model training. Our platform handles the complexities behind the scenes — screening, compliance, scheduling, and payments — so knowledge sharing stays focused on meaningful insights and real impact.

We’re a hyper-growth, profitable company, quickly expanding our expert network and launching new offices and products. We are headquartered in San Francisco, with offices in Brooklyn and Bangalore. Our customers include the fastest-growing digital health companies, technology companies, institutional investment firms, consulting firms, and AI labs. We are backed by top marketplace investors and operators from companies like DoorDash, Airbnb, and Affirm.

What we believe

Human knowledge is the world’s most valuable asset. And yet, despite being more interconnected than ever, most knowledge still remains stuck in our heads, inaccessible and underutilized. Our vision is to make human knowledge easily accessible and infinitely scalable by building tools for the new age knowledge economy.

About the role

At first glance, Office Hours looks simple: search, match, connect, and pay. Under the hood, the system is anything but.

We’re building and evolving a deeply interconnected platform spanning search, discovery, recommendations, data pipelines, logistics, payments, compliance, and performance. The entire stack has been built in-house, from expert profiles and discovery experiences to workflow automation and an underlying knowledge graph that ties everything together.

We’re looking for a Staff Full Stack Software Engineer who enjoys working across the stack, takes ownership of complex problems, and cares deeply about building thoughtful, high-quality product experiences. This is a hands-on role with real influence over product direction, technical architecture, and how we ship software.

What you’ll do

  • Own the design, implementation, and rollout of meaningful user-facing features, from problem definition through production

  • Partner closely with design, product, and client-facing teams to translate real user needs into shipped solutions

  • Architect, build, and evolve scalable, reliable systems across the front end, back end, and infrastructure

  • Set a high bar for code quality through clear implementations, thoughtful tradeoffs, and active participation in reviews and technical discussions

  • Explore and integrate modern tools, including AI-powered workflows, and share learnings that improve how the team builds and ships

What you bring

  • 8+ years of professional software engineering experience, with meaningful time spent working across the stack

  • A track record of shipping high-quality, user-facing products in production environments

  • Strong product intuition and the ability to translate ambiguous user or business problems into technical solutions

  • Comfort operating in fast-moving environments where priorities evolve and ownership matters

  • A bias toward action, paired with sound judgment and attention to detail

Our tech stack

  • Back end: Node.js, Typescript, MongoDB & Postgres, OpenSearch, Temporal

  • Front end: React, Next.js, Tailwind, shadcn

  • Infrastructure: AWS, Kubernetes, Docker, Datadog, Sentry

  • Workflow: GitHub, Slack, Notion, Figma, Linear, PostHog, Metabase

Benefits + Perks

  • Competitive salary and equity

  • Medical, dental, and vision coverage

  • 401(k)

  • Monthly wellness and fitness stipend

  • Paid time off policy, along with company holidays

  • Annual company off-sites (Tahoe, Mendocino, Mexico City, San Diego, Park City)

  • Parent-friendly policies, remote flexibility, and paid family leave

Pay Transparency Notice

Full-time offers include base salary, equity, and benefits.

Pay range: $225,000 - $250,000, based on seniority and relevant experience

*This role can be 100% remote, but we do have offices in San Francisco and NYC

Don’t meet every single requirement? Studies have shown that some candidates, especially underrepresented groups such as women and people of color, are less likely to apply to jobs unless they meet every single qualification. At Office Hours we believe in building a diverse and inclusive workplace, so if you’re excited about this role but don’t meet every qualification in the job description, we still encourage you to apply. You could still be the right candidate for this or other roles at Office Hours!



Please mention the word **LIGHTER** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$170000 - $190000 Full time
software assistant design system

Who is Flock?

Flock Safety is the leading safety technology platform, helping communities thrive by taking a proactive approach to crime prevention and security. Our hardware and software suite connects cities, law enforcement, businesses, schools, and neighborhoods in a nationwide public-private safety network. Trusted by over 5,000 communities, 4,500 law enforcement agencies, and 1,000 businesses, Flock delivers real-time intelligence while prioritizing privacy and responsible innovation.

We’re a high-performance, low-ego team driven by urgency, collaboration, and bold thinking. Working at Flock means tackling big challenges, moving fast, and continuously improving. It’s intense but deeply rewarding for those who want to make an impact.

With nearly $700M in venture funding and a $7.5B valuation, we’re scaling intentionally and seeking top talent to help build the impossible. If you value teamwork, ownership, and solving tough problems, Flock could be the place for you.

The Opportunity

We're hiring a Senior Software Engineer to build Night Shift, a conversational AI assistant that helps investigators surface critical evidence and close cases faster. You'll design and implement the conversational interface, build the orchestration backend that manages LLM interactions and tool calling, and develop integration pipelines connecting our AI to Flock's existing data platform and APIs. This is a ground-floor opportunity where product thinking matters as much as technical execution: you'll shape chat experiences with complex context management, partner with platform teams to design new APIs or leverage existing ones, and solve the reliability challenges of deploying AI in high-stakes investigative workflows. You'll collaborate closely with ML engineers on prompt engineering and agentic workflows while maintaining a strong point of view on what makes a great user experience. If you've built LLM-powered products and thrive at the intersection of customer impact and technical depth, this role is for you.

The Skillset

  • Love for coding and continuous learning, especially in the rapidly evolving LLM space

  • Resourceful problem-solver mindset: excel in ambiguous situations and take initiative to define product direction

  • Strong TypeScript / Node / Express skills for web services and API design (REST, SSE, WebSockets for streaming)

  • Modern web framework expertise (React / TypeScript preferred), particularly for conversational UI and chat interfaces

  • Hands-on LLM experience: OpenAI/Anthropic/Gemini APIs, prompt engineering, streaming responses, and conversation context management

  • Familiarity with agentic patterns: function calling, tool use (MCP), and orchestrating multi-step workflows

  • API integration skills: consume existing APIs or design new ones to ground AI in investigative data

  • Database confidence: PostgreSQL and sophisticated SQL for data retrieval

  • Cloud infrastructure basics: Docker, Kubernetes (Helm), AWS services (S3, SQS, API Gateway)

  • Product-minded: translate user feedback into technical requirements and make pragmatic tradeoffs

  • Bonus points for: LLM evaluation tools (LangSmith, Langfuse), vector search/RAG, microservices architecture, or Terraform
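The orchestration-backend work described above centers on the agentic tool-calling loop: the model either requests a tool or answers, and the backend executes tools and feeds results back. The sketch below stubs out the LLM entirely (a real implementation would call the OpenAI/Anthropic/Gemini APIs; the tool, messages, and plate data are all made up for illustration).

```python
# Minimal agentic tool-calling loop with a stubbed model.
from typing import Callable

TOOLS: dict[str, Callable[[dict], str]] = {
    "lookup_plate": lambda args: f"plate {args['plate']} last seen downtown",
}

def stub_model(messages: list[dict]) -> dict:
    """Stand-in for an LLM: request a tool on the first turn, then answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_plate", "args": {"plate": "ABC123"}}
    return {"answer": messages[-1]["content"]}

def run_agent(user_msg: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = stub_model(messages)
        if "answer" in reply:                         # model is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["args"])  # execute requested tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")

print(run_agent("Where was plate ABC123 last seen?"))
# -> plate ABC123 last seen downtown
```

Production versions add streaming, context-window management, and guardrails around which tools the model may invoke.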

90 Days at Flock

The First 30 Days

  • Onboard and Integrate:

    • Familiarize yourself with Flock's mission, investigative workflows, and how customers use our platform today

    • Pair with engineers across Cloud Software and ML teams to understand existing APIs, data models, and system architecture

    • Build relationships with key stakeholders to understand their capabilities and constraints. Meet with members of:

      • Machine Learning (agentic systems, model serving)

      • Data Engineering (investigative datasets, pipelines)

      • Platform teams (APIs, infrastructure)

      • Product and Design (customer needs, UX direction)

  • Ship Early and Learn:

    • Complete a first-day push to production

    • Pick up initial sprint tickets: bug fixes, small UX improvements, or API integrations

    • Participate in customer feedback sessions to understand investigator workflows and pain points

The First 60 Days

  • Build the Foundation:

    • Deliver core conversational UI components and establish patterns for chat interfaces

    • Implement backend orchestration for LLM interactions and tool calling

    • Stand up observability for the AI system (logging, tracing, basic metrics)

    • Work with ML team to integrate agentic workflows and refine prompt strategies

  • Demonstrate Velocity:

    • Own end-to-end features that connect UI, backend orchestration, and data integrations

    • Collaborate with Product to rapidly iterate based on early user testing

    • Propose technical improvements to chat quality, performance, or reliability

90 Days & Beyond

  • Drive Product Impact:

    • Lead development of a core Night Shift capability that demonstrably improves investigator efficiency

    • Represent the team in cross-functional initiatives, balancing zero-to-one experimentation with engineering best practices

    • Establish patterns for testing and quality in an evolving AI product

  • Shape the Direction:

    • Influence product roadmap through technical insights and customer feedback

    • Mentor team members on LLM integration patterns or full-stack best practices

    • Own a domain area (e.g., conversation management, data grounding, streaming architecture)

The Interview Process

We want our interview process to be a true reflection of our culture: transparent and collaborative. Throughout the interview process, your recruiter will guide you through the next steps and ensure you feel prepared every step of the way. To check out our interview stages and how you should prepare, visit the experiences section of our careers page.

Salary & Equity

In this role, you’ll receive a starting salary of $170,000-$185,000 as well as stock options. Base salary is determined by job-related experience, education/training, as well as market indicators. Your recruiter will discuss this in-depth with you during our first chat.

The Perks

🌴Flexible PTO: We seriously mean it, plus 11 company holidays.

⚕️Fully-paid health benefits plan for employees: including Medical, Dental, and Vision and an HSA match.

👪Family Leave: All employees receive 12 weeks of 100% paid parental leave. Birthing parents are eligible for an additional 6-8 weeks of physical recovery time.

🍼Fertility & Family Benefits: We have partnered with Maven, a complete digital health benefit for starting and raising a family. Flock will provide a $50,000-lifetime maximum benefit related to eligible adoption, surrogacy, or fertility expenses.

🧠Spring Health: Spring Health offers a variety of mental health benefits, including therapy, coaching, medication management, and digital tools, all tailored to each individual's needs.

💖Caregiver Support: We have partnered with Cariloop to provide our employees with caregiver support

💸Carta Tax Advisor: Employees receive 1:1 sessions with Equity Tax Advisors who can address individual grants, model tax scenarios, and answer general questions.

💚ERGs: We want all employees to thrive and feel like they belong at Flock. We offer three ERGs today - Women of Flock, Flock Proud, and Melanin Motion. If you are interested in talking to a representative from one of these, please let your recruiter know.

💻WFH Stipend: $150 per month to cover the costs of working from home.

📚Productivity Stipend: $300 per year to use on Audible, Calm, Masterclass, Duolingo, Grammarly and so much more.

🏠Home Office Stipend: A one-time $750 to help you create your dream office.

If an offer is extended and accepted, this position requires the ability to obtain and maintain Criminal Justice Information Services (CJIS) certification as a condition of employment. Applicants must meet all FBI CJIS Security Policy requirements, including a fingerprint-based background check.

Flock is an equal opportunity employer. We celebrate diverse backgrounds and thoughts and welcome everyone to apply for employment with us. We are committed to fostering an environment that is inclusive, transparent, and collaborative. Mutual respect is central to how Flock operates, and we believe the best solutions come from diverse perspectives, experiences, and skills. We embrace our differences and know that we are stronger working together.

If you need assistance or an accommodation due to a disability, please email us at recruiting@flocksafety.com. This information will be treated as confidential and used only to determine an appropriate accommodation for the interview process.

At Flock Safety, we compensate our employees fairly for their work. Base salary is determined by job-related experience, education/training, as well as market indicators. The range above is representative of base salary only and does not include equity, sales bonus plans (when applicable) and benefits. This range may be modified in the future. This job posting may span more than one career level.



Please mention the word **EMPOWERMENT** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Data Engineer
  • ARB Interactive
  • Miami
security python game technical

At ARB Interactive, creativity, tech, and play collide. Founded in 2022, we've grown to nearly 200 team members and were named one of LinkedIn's 2025 Top 50 Startups in the United States! We move fast, think big, and love bold ideas that push boundaries (and buttons). From new rewards to fresh game mechanics, every challenge is a chance to innovate and have fun doing it. Our culture is collaborative, curious, and full of laughter because great ideas grow best between coffee, code, and a few epic high-fives.

Summary

We’re looking for a Senior Data Engineer to help shape and expand the foundation of our modern data stack. This is a hands-on role for someone who’s excited to build and improve robust, scalable pipelines and collaborate cross-functionally to turn raw data into business-critical insights.

As a senior member of the team, you’ll play a key role in technical decision-making, partnering closely with analytics, engineering, product, and other talented and collaborative teammates to help ensure our systems scale with the business. If you’re passionate about solving complex real-world data challenges that move the needle in a high-growth environment, this role offers the perfect blend of technical challenge and meaningful impact.

This is a great opportunity for someone who thrives on hands-on execution but also enjoys mentoring others, guiding architectural decisions, and helping shape the future of the data function.

Responsibilities

  • Design, build, and maintain scalable, efficient ETL/ELT pipelines

  • Model clean, trusted datasets to support analytics, experimentation, and reporting

  • Optimize our data infrastructure for performance, cost, governance, and maintainability

  • Partner with data analysts and product teams to improve data accessibility and accuracy

  • Enable self-service analytics by designing intuitive data models and comprehensive documentation

  • Implement robust data quality frameworks, monitoring, alerting and observability to ensure data reliability

  • Collaborate with product and engineering on instrumentation of new product features and events

  • Mentor junior team members, contribute to code reviews, and share best practices

  • Influence the long-term direction of our data architecture and tooling

  • Take on team leadership or people management responsibilities as the team scales
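One responsibility above is implementing data quality frameworks with monitoring. As a toy illustration only (check names and row schema are invented; a real stack would likely lean on dbt tests or similar), named expectations can be run over rows and failures tallied:

```python
# Tiny data-quality framework: run named checks over rows, count failures.
from typing import Any, Callable

Row = dict[str, Any]
Check = Callable[[Row], bool]

CHECKS: dict[str, Check] = {
    "user_id_present": lambda r: r.get("user_id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

def run_checks(rows: list[Row]) -> dict[str, int]:
    """Return, per check, how many rows failed it."""
    failures = {name: 0 for name in CHECKS}
    for row in rows:
        for name, check in CHECKS.items():
            if not check(row):
                failures[name] += 1
    return failures

rows = [
    {"user_id": "u1", "amount": 10},
    {"user_id": None, "amount": -5},   # fails both checks
]
print(run_checks(rows))  # {'user_id_present': 1, 'amount_non_negative': 1}
```

The failure counts would normally feed the alerting and observability layer mentioned in the responsibilities.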

Requirements

  • 5+ years of experience in data engineering or related roles

  • Strong SQL and Python skills, with a focus on readable and efficient code

  • Deep understanding of data warehousing concepts and data modeling best practices

  • Hands-on experience with tools in the modern data stack (e.g., dbt, Airflow, Snowflake, BigQuery, Redshift)

  • Strong communication and collaboration skills; able to work cross-functionally with analysts, PMs, and engineers

  • A bias toward action and ownership; you thrive in fast-paced, high-autonomy environments

Nice to Have

  • Experience in gaming, entertainment, or high-volume consumer applications

  • Familiarity with event tracking platforms (e.g., Segment, Amplitude)

  • Experience hiring or onboarding engineers in a high-growth environment

Diversity Commitment: We are focused on building a diverse and inclusive team. We welcome people of all backgrounds, experiences, abilities, and perspectives and are an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Important Security Notice: Our recruitment team will only contact candidates through official channels using @arbinteractive.com email addresses and via our recruiting platform, Ashby. If you find a position on a third party careers page (LinkedIn, Indeed, etc.), the job posting will redirect you to our careers page (https://jobs.ashbyhq.com/arb-interactive) to begin your application. We will never request payment, banking information, or personal identification details during the application process.

If you're ever uncertain about the legitimacy of communication claiming to be from our company, please forward it to recruiting@arbinteractive.com for verification before responding or clicking any links.



Please mention the word **SMOOTHES** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Data Engineer
  • TextNow
  • Open- Canada
python support travel cloud

We believe communication belongs to everyone. We exist to democratize phone service.  TextNow is evolving the way the world connects, and that's because we're made up of people with curious minds who bring an optimistic yet critical lens into the work we do.   We're the largest provider of free phone service in the nation. And we're just getting started. 

 

Join us in our mission to break down barriers to communication and free the flow of conversation for people everywhere. 

 

TextNow is looking for an experienced Data Engineer with hands-on experience designing and developing data platforms. You will own the design, development, and maintenance of TextNow's data platform, enabling us to make effective data-informed decisions. You will be part of cross-functional efforts to build scalable and reliable frameworks that support all of TextNow's business and data products. In this role, you will interact with different functional areas within the business and influence decision-making in a fast-growing mobile communications start-up. 

\n


What You'll Do
  • Own TextNow's data warehouse, data pipelines, and integration points between various business systems. 
  • Design, develop, and support new and existing batch and real-time data pipelines, and recommend improvements or modifications. 
  • Manage data models to enable AI/ML data products. 
  • Champion TextNow's data ecosystem by working with engineering and infrastructure teams to enable quicker access to data for insights and decision-making. 
  • Communicate data modeling and architecture processes to cross-functional teams. 
  • Identify, design, and implement process improvements across the data platform. 
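A core property of the batch pipelines described above is idempotency: replaying the same batch must leave the result unchanged. A minimal sketch of that merge semantics, with made-up table and key names (real pipelines would express this as a warehouse MERGE or dbt incremental model):

```python
# Idempotent batch merge: upsert rows by primary key so that re-running
# the same batch is a no-op.
from typing import Any

Row = dict[str, Any]

def merge_batch(store: dict[str, Row], batch: list[Row]) -> dict[str, Row]:
    """Upsert by primary key; the batch's version of a row wins."""
    merged = dict(store)
    for row in batch:
        merged[row["id"]] = row
    return merged

store = {"a": {"id": "a", "v": 1}}
batch = [{"id": "a", "v": 2}, {"id": "b", "v": 1}]
once = merge_batch(store, batch)
twice = merge_batch(once, batch)       # replaying the batch changes nothing
print(once == twice, once["a"]["v"])   # True 2
```

Keying every write this way is what makes pipeline retries and backfills safe.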


Who You Are
  • Have 3–5 years of experience working with data warehouse/data lake and ETL architectures (e.g., Databricks, Iceberg), cloud data warehouses (e.g., Snowflake), and hands-on experience in Python and SQL — preferably at companies with fast-growing and evolving data needs. 
  • Have at least 2 years of experience with Airflow and Spark. 
  • Have developed scalable, real-time data pipelines using Python/Scala, SQL, and distributed processing frameworks such as Spark or Flink. 
  • Have exposure to the AWS platform and services such as EKS, MSK, and MWAA (preferred). 
  • Have experience building data features using Snowflake, dbt, and Python to power real-time AI/ML inference. 
  • Are respectfully candid, with the ability to initiate and drive tasks to completion. 
  • Are highly organized, dependable, and follow a structured work approach. 
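The incremental batch pipelines described above often reduce to a "high-watermark" pattern: each run extracts only rows newer than the last successfully loaded timestamp, so reruns are cheap and safe. A minimal stdlib-only sketch (the row shape and field names are illustrative, not TextNow's):

```python
# Incremental extraction via a high-watermark: each run picks up only rows
# newer than the last successfully loaded timestamp. Plain dicts stand in
# for a real source system and warehouse.
from datetime import datetime, timezone

def extract_incremental(source_rows, watermark):
    """Return rows strictly newer than the watermark, plus the new watermark."""
    fresh = [r for r in source_rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
    {"id": 3, "updated_at": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]

wm = datetime(2024, 1, 1, tzinfo=timezone.utc)
fresh, wm = extract_incremental(rows, wm)   # picks ids 2 and 3
fresh2, wm = extract_incremental(rows, wm)  # nothing new on a re-run
```

In an orchestrated setup (e.g., Airflow), the watermark would be persisted between runs rather than held in a local variable.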


\n
$88,900 - $127,000 a year
Final compensation will be determined based on a number of factors, including skills, experience, location and on-the-job performance. We’re committed to paying competitively to hire and retain high-caliber talent. We recognize that exceptional talent may fall outside of these ranges; we encourage all qualified candidates to apply even if their compensation expectations are outside of the listed range.
\n

More about TextNow...


Our Values:

·  Customer Obsessed (We strive to have a deep understanding of our customers)

·  Do Right By Our People (We treat each other with fairness, respect, and integrity)

·  Accept the Challenge (We adopt a "Yes, We Can" mindset to achieve ambitious goals)

·  Act Like an Owner (We treat this company like it's our own... because it is!)

·  Give a Damn! (We are deeply committed and passionate about our work and achieving results)


Benefits, Culture, & More:

·   Strong work life blend 

·   Flexible work arrangements (wfh, remote, or access to one of our office spaces)

·   Employee Stock Options 

·   Unlimited vacation 

·   Competitive pay and benefits

·   Parental leave

·   Benefits for both physical and mental well-being (wellness credit and L&D credit)

·   We travel a few times a year for various team events, company wide off-sites, and more


Diversity and Inclusion:

At TextNow, our mission is built around inclusion and offering a service for EVERYONE, in an industry that traditionally only caters to the few who have the means to afford it. We believe that diversity of thought and inclusion of others promotes a greater feeling of belonging and higher levels of engagement. We know that if we work together, we can do amazing things, and that our differences are what make our product and company great. 


TextNow Candidate Policy

By submitting an application to TextNow, you agree to the collection, use, and disclosure of your personal information in accordance with the TextNow Candidate Policy



Please mention the word **COOPERATIVELY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Data Analyst
  • TextNow
  • Open- Canada
analyst python support growth

We believe communication belongs to everyone. We exist to democratize phone service.  TextNow is evolving the way the world connects and that's because we're made up of people with curious minds who bring an optimistic, yet critical lens into the work we do.   We're the largest provider of free phone service in the nation. And we're just getting started.


Join us in our mission to break down barriers to communication and free the flow of conversation for people everywhere.


TextNow is looking for a motivated Senior Data Analyst to join our Analytics & Insights team. You’ll drive data-informed decision-making across the organization by translating business problems into analytical solutions, designing insightful dashboards, and uncovering trends that shape strategic actions.

This role is perfect for someone with strong analytical skills, deep business acumen, and a passion for using data to tell stories that inspire action.


What You’ll Do


Analyze complex datasets to identify actionable insights, trends, and opportunities

Develop and maintain dashboards, reports, and data visualizations using tools like Looker, Tableau, Power BI, or Redash

Conduct ad hoc analyses to support product, marketing, and operations initiatives

Partner with data engineering teams to ensure data quality, integrity, and availability

Develop and maintain KPI frameworks and performance measurement systems

Assist in building scalable data models and automation pipelines

Collaborate cross-functionally with Product, Finance, Marketing, and Operations teams to define analytical needs

Translate business questions into data requirements and present insights and recommendations to senior leadership

Mentor junior analysts and foster a culture of data-driven decision-making

Define and standardize analytical best practices across the organization


You’ll Be a Great Fit If You Have


Bachelor’s degree in Data Science, Statistics, Mathematics, Economics, Computer Science, or a related field (Master’s preferred)

5+ years of experience in data analytics or business intelligence

Proficiency in SQL and at least one programming language (e.g., Python or R)

Experience with modern BI tools (Looker, Tableau, Power BI, Mode, or Redash)

Strong understanding of A/B testing, statistical analysis, and data modeling

Experience working with large-scale datasets and cloud-based environments (e.g., Snowflake, Eppo)

Excellent communication and storytelling skills with data

Attention to detail, analytical rigor, and curiosity for continuous improvement
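The A/B-testing requirement above can be illustrated with a minimal two-proportion z-test, stdlib-only; the conversion counts here are made-up example data, not anything from TextNow:

```python
# Two-proportion z-test for an A/B experiment, using only the stdlib.
from math import sqrt, erf

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 5.0% vs 6.5% conversion over 4,000 users each.
z, p = ab_z_test(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
```

In practice a library routine (e.g., from SciPy or statsmodels) would be used instead, but the underlying pooled-variance calculation is the same.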


Preferred Skills


Experience in telecommunications, SaaS, or consumer app environments

Familiarity with machine learning concepts and predictive analytics

Understanding of ETL processes and data warehousing fundamentals

Experience collaborating with product teams on experimentation and growth analytics


Estimated Base Salary Range by Location:


Canada (CAD): $103,700 – $140,300

US – National (USD): $114,800 – $155,300

Final compensation will be determined based on a number of factors, including skills, experience, location, and on-the-job performance. We’re committed to paying competitively to hire and retain high-caliber talent. We recognize that exceptional talent may fall outside of these ranges; we encourage all qualified candidates to apply even if their compensation expectations are outside of the listed range.

\n


\n

More about TextNow...


Our Values:

·  Customer Obsessed (We strive to have a deep understanding of our customers)

·  Do Right By Our People (We treat each other with fairness, respect, and integrity)

·  Accept the Challenge (We adopt a "Yes, We Can" mindset to achieve ambitious goals)

·  Act Like an Owner (We treat this company like it's our own... because it is!)

·  Give a Damn! (We are deeply committed and passionate about our work and achieving results)


Benefits, Culture, & More:

·   Strong work life blend 

·   Flexible work arrangements (wfh, remote, or access to one of our office spaces)

·   Employee Stock Options 

·   Unlimited vacation 

·   Competitive pay and benefits

·   Parental leave

·   Benefits for both physical and mental well-being (wellness credit and L&D credit)

·   We travel a few times a year for various team events, company wide off-sites, and more


Diversity and Inclusion:

At TextNow, our mission is built around inclusion and offering a service for EVERYONE, in an industry that traditionally only caters to the few who have the means to afford it. We believe that diversity of thought and inclusion of others promotes a greater feeling of belonging and higher levels of engagement. We know that if we work together, we can do amazing things, and that our differences are what make our product and company great. 


TextNow Candidate Policy

By submitting an application to TextNow, you agree to the collection, use, and disclosure of your personal information in accordance with the TextNow Candidate Policy



Please mention the word **WISELY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Data Engineer
  • Loop
  • Remote
python growth code cloud

The Data team at Loop is on a mission to empower merchants with transformative data products that drive success beyond returns. By building tools that merchants love and fostering a robust data culture, the team enables smarter decision-making across the board. Whether creating insights to guide merchants’ strategies or strengthening internal data-driven processes, the Data team is integral to shaping Loop’s future and unlocking new opportunities for our merchants and teams alike.


As a Data Engineer at Loop, you’ll have the chance to significantly impact our ability to solve merchant problems and fulfill merchant needs. You’ll be an integral member of the team, owning all aspects of data availability, quality, and ease of use of our data platforms. Your success in this role will depend on a healthy blend of creativity and structure with a continuous focus on delivering value to the business.


At Loop, we’re intentional about the way we work so that we can do our best work. We call this our Blended Working Environment. We work from our HQ in Columbus, OH, or one of our Hub or Secluded locations, and are distributed throughout the United States, select Canadian provinces, and the United Kingdom. For this position, we’re looking for someone to join us in a location where we already have an established Hub or HQ.


Our data stack: Snowflake, Fivetran, dbt, GoodData, Secoda

\n


What you’ll do:
  • Maintain and optimize existing data pipelines and warehouse solutions for performance, reliability, and cost efficiency. 
  • Support internal analytics and ML teams with data modeling, schema updates, and ad hoc data needs. 
  • Contribute to dbt projects and assist in ensuring data quality, observability, and accessibility. 
  • Write clean, tested, and documented code, and participate in code reviews. 
  • Collaborate with senior data engineers to understand and contribute to new ingestion sources, ML pipelines, and other forward-looking initiatives. 
  • Ensure internal stakeholders can access and use data effectively, enabling faster business insights and decision-making.


Your experience:
  • 4 years of hands-on experience building and maintaining data pipelines and data sets in a cloud environment (Snowflake, GBQ, Redshift, etc.). *We're expecting top candidates to have hands-on experience with Snowflake, specifically!
  • 2+ years of Python experience, creating reliable workflows and data processing scripts. 
  • Strong SQL skills and experience with data modeling. 
  • Experience with dbt or similar transformation tools. Familiarity with distributed systems and ETL/ELT processes.
  • Nice to have: Experience with data observability, lineage, or governance tools. 
  • Nice to have: Exposure to BI tools and supporting analytics teams. 
  • Nice to have: Experience working on cross-functional data projects. 
  • Nice to have: Familiarity with Fivetran, Kafka, or modern data integration platforms. 


Our Data Team values
  • Progress over perfection and focus on delivering value. 
  • Strong, open, and continuous collaboration with peers and stakeholders. 
  • Autonomy and accountability. 
  • Drive to solve problems. 
  • Engagement and participation in our Agile practices.


\n
$118,400 - $177,600 a year
We know that making decisions about your career and compensation is a huge deal. Because of that, we’re incredibly thoughtful about our compensation strategy. We want you to feel safe and excited, but also comfortable with the compensation package of a startup. We’ve outlined some important information for you here, but please know there’s a lot more to compensation than we can cover in this job posting. 

The posted salary range is the base salary for this opportunity. The salary range is subject to change, and may be adjusted in the future.

The actual annual salary paid for this position will be based on several factors, including, but not limited to: your prior experience and skills related to the position, geographic location, company needs, current market demands, and your total compensation goals. 

Great humans deserve great benefits. At Loop, you’ll be eligible for benefits such as: medical, dental, and vision insurance, flexible PTO, company holidays, sick & safe leave, parental leave, 401k, monthly wellness benefit, home workstation benefit, phone/internet benefit, and equity.
\n

#LI-ST1


Loop Story


Commerce should feel effortless. Every product adored, every order perfect, every customer loyal for life. But reality is messier: operations get tangled, margins grow thin, and trust is fragile. That’s where Loop steps in. We create confidence where commerce fails.


We started by fixing returns and exchanges. Today, we’re building a connected commerce operations suite — powering everything from order tracking to fraud prevention, with hundreds of innovations in between. Grounded in data and insight, our platform helps merchants make smarter decisions with every transaction. Over 5,000 of the world’s most loved brands trust Loop to turn cost centers into growth engines. Our mission is simple: protect margins, delight customers, and help merchants build businesses that last.


Life at Loop is rooted in our core values. We balance high empathy with high standards, knowing that work is better when we can show up authentically and resilience is built by facing challenges head-on. We expect you’ll grow quickly, learning skills that last far beyond your time here. Loop is a formative chapter in your career — a chance to shape the future of commerce and to leave better than when you arrived.


Learn more about us here: https://loopreturns.com/careers.


You can review our privacy notice here.



Please mention the word **LIBERATION** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
design system python music

At Spotify, we're building the revenue platform that drives how revenue and taxes are processed across the company — enabling reliable, scalable financial operations across every market, product line, and partner. Our systems are essential to Spotify’s ability to earn, track, and report revenue and taxes, supporting everything from subscriptions and advertising to creator payouts.


As engineers on this team, we design and maintain the backend and data platform capabilities that power millions of transactions each day with precision. We build services that handle tax calculations, produce compliant financial records, and support regulatory requirements across global markets — all while staying agile to keep up with Spotify’s evolving business models. We equip Finance teams with flexible, configurable tools that govern how revenue and taxes are applied across products, enabling rapid adjustments without needing deep technical expertise. Our modular, process-oriented components simplify the development, maintenance, and scaling of the critical Order to Cash enterprise process that underpins Spotify’s financial operations.

\n


What You'll Do
  • Gain deep expertise in Spotify’s revenue platform, understanding how it enables financial operations, compliance, and strategic decision-making.
  • Design and implement scalable backend and data systems that process millions of transactions daily — supporting accurate tax calculation, billing, revenue recognition, financial configuration, and tax reporting.
  • Build intuitive, self-serve tools that empower Finance teams to define and manage product-specific revenue and tax configuration independently, without requiring engineering involvement.
  • Develop and enhance modular platform capabilities that encode critical enterprise workflows, promoting consistency, reusability, and ease of maintenance across financial systems.
  • Lead the creation of new platform capabilities within the Tax Solutions space, focusing on Tax Reporting and global regulatory compliance.
  • Partner closely with Engineers, Product and Finance stakeholders to design systems that are scalable, auditable, and highly reliable.
  • Champion engineering best practices, strong architectural design, and operational excellence across backend and data platforms.
  • Foster a collaborative team culture rooted in shared ownership, constructive feedback, and continuous improvement.


Who You Are
  • You have experience in data engineering, including building and maintaining data pipelines.
  • You are proficient in Python, and ideally Scala or Java.
  • You possess a foundational understanding of system design, data structures, and algorithms, coupled with a strong desire to learn quickly, embrace feedback, and continuously improve your technical skills.
  • You’re familiar with cloud-native development and deployment, ideally within the Google Cloud Platform.
  • You think critically about system design and strive to build solutions that are reliable, maintainable, and auditable at scale.
  • You have good communication skills and can articulate your ideas and ask clarifying questions.
  • You love collaborating with others.
  • You thrive in ambiguous and fast-changing environments, and know how to make progress even when requirements are evolving.
  • You approach platform engineering with empathy for your users - prioritising usability, configurability, and long-term sustainability.
  • You care deeply about code quality, testing, and documentation, and you aim to build systems that are easy to understand and operate.
  • You enjoy collaborating across functions and bring clarity and alignment when working with engineering, finance, and product partners.
  • You’re naturally curious, self-motivated, and always looking for ways to grow your technical skills and improve how things are done.


Where You'll Be
  • This role is based in London, United Kingdom.
  • We offer you the flexibility to work where you work best! There will be some in-person meetings, but the role still allows flexibility to work from home.


\n

Spotify is an equal opportunity employer. You are welcome at Spotify for who you are, no matter where you come from, what you look like, or what’s playing in your headphones. Our platform is for everyone, and so is our workplace. The more voices we have represented and amplified in our business, the more we will all thrive, contribute, and be forward-thinking! So bring us your personal experience, your perspectives, and your background. It’s in our differences that we will find the power to keep revolutionizing the way the world listens.


At Spotify, we are passionate about inclusivity and making sure our entire recruitment process is accessible to everyone. We have ways to request reasonable accommodations during the interview process and help assist in what you need. If you need accommodations at any stage of the application or interview process, please let us know - we’re here to support you in any way we can.


Spotify transformed music listening forever when we launched in 2008. Our mission is to unlock the potential of human creativity by giving a million creative artists the opportunity to live off their art and billions of fans the chance to enjoy and be passionate about these creators. Everything we do is driven by our love for music and podcasting. Today, we are the world’s most popular audio streaming subscription service.



Please mention the word **NOBLY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Data Engineer
  • Ethena Labs
  • Globally Remote
crypto back-end python cto

Who We Are and What We are Doing:

Ethena Labs is actively building and deploying a suite of groundbreaking digital dollar products aiming to upgrade money into the internet era.


Our flagship product, USDe, is a synthetic dollar backed by digital assets, and takes the novel approach of using a delta-neutral hedged basis strategy to maintain its peg. This product scaled from zero to $15b in 18 months.


Expanding on this, iUSDe is designed specifically for traditional financial institutions, incorporating necessary compliance features to enable them to access the crypto-native rewards our protocol generates, in an institutional-friendly manner.


Ethena has also developed USDtb: a fiat-backed, GENIUS-compliant stablecoin in partnership with BlackRock, which has scaled to ~$2b.


These products are also available through a white-label offering, where any application, chain, wallet, or exchange can launch its own stablecoin on Ethena's back-end infrastructure.


Through these offerings, Ethena Labs is not just creating new financial products; we are building the foundational infrastructure for a more open, efficient, and interconnected global financial system.


Open job offerings will be focused on two new major product lines coming to market in the next few months.


Join us!!


The Senior Data Engineer is a critical role reporting directly to the CTO. The primary mission is to rapidly deliver a reliable, production-ready market data platform that serves as the single source of truth for trading, risk, and business intelligence.


You’ll immediately own the entire data platform from inception and deliver working historical and real-time Tardis pipelines in the first 60 days. Beyond the initial MVP, the role requires iteratively evolving the platform into a best-in-class, cloud-native, observable, and self-service system. You will work hand in hand with the CTO & trading team to scope & deliver to business needs. The Senior Data Engineer will also serve as the go-to data expert for the firm and will be responsible for mentoring future junior data engineers or analysts.


\n


What You’ll Do
  • Rapidly spin up the cloud environment. Deliver working historical backfill pipelines from Tardis.dev into a queryable database.
  • Deliver a real-time Tardis WebSocket pipeline, ensuring data is normalized, cached for live consumption, accurate, replayable, and queryable by Day 60.
  • Ensure all pipelines are idempotent, retryable, and use exactly-once semantics. Implement full CI/CD, Terraform, automated testing, and secrets management.
  • Implement proper observability (structured logs, metrics, dashboards, alerting) from day one. Provide immediate self-service access to the MVP database for Trading and BI teams via tools like Tableau/Metabase, and through simple internal REST APIs.
  • Develop specialized timeseries datasets, including a USDe backing-asset series and a full opportunity-surface timeseries for delta-neutral/lending/borrow opportunities.
  • Ingest data from additional sources (Kaiko, CoinAPI, on-chain via TheGraph/Dune). Plan for 10x+ data growth via schema evolution, partitioning, and performance tuning. Establish enterprise-grade governance, including a data quality framework, RBAC, audit logs, and a semantic layer.
  • Create full architecture documentation, runbooks, and a data dictionary. Onboard and mentor future junior staff.
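The idempotency and exactly-once requirements above usually come down to one pattern: messages carry a natural key, and the sink upserts by that key, so redelivery over an at-least-once transport converges to exactly-once effects. A minimal stdlib-only sketch (the message shape is hypothetical, not a Tardis schema):

```python
# Idempotent, replayable ingestion: the sink upserts by a natural key, so
# replaying a batch is a no-op rather than a duplicate insert.

def upsert(store: dict, messages: list[dict]) -> dict:
    """Apply messages keyed by (exchange, symbol, ts); replays converge."""
    for m in messages:
        key = (m["exchange"], m["symbol"], m["ts"])
        store[key] = m["price"]  # last-write-wins on the natural key
    return store

batch = [
    {"exchange": "binance", "symbol": "BTCUSDT", "ts": 1, "price": 60000.0},
    {"exchange": "binance", "symbol": "BTCUSDT", "ts": 2, "price": 60010.0},
]

store = upsert({}, batch)
store = upsert(store, batch)  # replaying the same batch changes nothing
```

In a real pipeline the `store` would be a time-series database with a unique constraint or merge/upsert statement on the same key, which is what makes retries safe.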


What We’re Looking For
  • Proven track record of delivering working, production data in weeks, not months, with the ability to ruthlessly cut scope to hit a 60-day MVP while managing technical debt.
  • Have built Tardis historical and real-time pipelines before (or equivalent high-quality crypto market data feeds), understanding specific quirks, rate limits, and WebSocket structures.
  • Expert in large-scale, reliable ETL/ELT for financial or market data.
  • Fluent in provisioning full environments with Terraform in days and expert in AWS/GCP serverless technologies.
  • Expert Python and SQL skills and proficiency with time-series databases like TimescaleDB or ClickHouse, ensuring fast queries from day one.
  • Advanced knowledge of WebSocket clients, message queues, and low-latency streaming, GitOps, automated testing/deploy and observability practices.
  • Significant understanding of stablecoins, lending protocols, and opportunity surface concepts, or a proven ability to ramp up extremely quickly.


\n

Why Ethena Labs?


You'd be joining a group that has established itself as one of the most successful crypto-native companies of all time, with a mission to revolutionise decentralised finance and its position in global finance.


Work alongside a passionate and innovative team that values collaboration and creativity.

Enjoy a flexible, remote-friendly work environment with established opportunities for personal growth and learning.


If you subscribe to the mission of separating the dollar from the state, then we want to hear from you!


We look forward to receiving your application and will be in touch after having a chance to review. 


In the meantime, here are some links to more information about Ethena Labs to help you check us out:

Website

Twitter/X

LinkedIn



Please mention the word **INFALLIBILITY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$60000 - $80000 Full time
Data Engineer
  • Sayari
  • Remote - US
python software code financial

About Sayari: 

Sayari is a risk intelligence provider that equips the public and private sectors with immediate visibility into complex commercial relationships by delivering the largest commercially available collection of corporate and trade data from over 250 jurisdictions worldwide. Sayari's solutions enable risk resilience, mission-critical investigations, and better economic decisions. 

Headquartered in Washington, D.C., its solutions are trusted by Fortune 500 companies, financial institutions, and government agencies, and are used globally by thousands of users in over 35 countries. Funded by world-class investors, with a strategic $228 million investment by TPG Inc. (NASDAQ: TPG) in 2024, Sayari has been recognized by the Inc. 5000 and the Deloitte Technology Fast 500 as one of the fastest growing private companies in the United States and was featured as one of Inc.’s “Best Workplaces” for 2025.

POSITION DESCRIPTION

Sayari is looking for an Entry-Level Data Engineer to join our Data team located in Washington, DC. The Data team is an integral part of our Engineering division and works closely with our Software & Product teams, as well as other key stakeholders across the business.

JOB RESPONSIBILITIES:

  • Write and deploy crawling scripts to collect source data from the web
  • Write and run data transformers in Scala Spark to standardize bulk data sets
  • Write and run modules in Python to parse entity references and relationships from source data
  • Diagnose and fix bugs reported by internal and external users
  • Analyze and report on internal datasets to answer questions and inform feature work
  • Work collaboratively on and across a team of engineers using basic agile principles
  • Give and receive feedback through code reviews
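The "parse entity references and relationships" responsibility above can be sketched as a small pure function: one raw registry record in, entity and relationship dicts out. Everything here — the input schema, field names, and output shapes — is illustrative, not Sayari's actual data model:

```python
# Hypothetical parser: extract entities and officer->company relationships
# from a single raw corporate registry record.

def parse_record(raw: dict) -> tuple[list[dict], list[dict]]:
    """Return (entities, relationships) parsed from one registry row."""
    company = {"id": raw["reg_no"], "type": "company", "name": raw["name"].strip()}
    entities, relationships = [company], []
    for officer in raw.get("officers", []):
        person = {"id": officer["id"], "type": "person", "name": officer["name"].strip()}
        entities.append(person)
        relationships.append(
            {"source": person["id"], "target": company["id"], "role": officer["role"]}
        )
    return entities, relationships

entities, rels = parse_record({
    "reg_no": "C-123",
    "name": " Acme Trading Ltd ",
    "officers": [{"id": "P-9", "name": "J. Doe", "role": "director"}],
})
```

Keeping the parser a pure function of one record makes it easy to unit-test and to run at scale inside a Spark transformer.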

SKILLS & EXPERIENCE

Req

Please mention the word **HARMLESS** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

$$$ Full time
Data Scientist
  • Arbol
  • New York City, New York
back-end python support fintech

Arbol is a global climate risk coverage platform and FinTech company offering full-service solutions for any business looking to analyze and mitigate exposure to climate risk. Arbol’s products offer parametric coverage which pays out based on objective data triggers rather than subjective assessment of loss. Arbol’s key differentiator versus traditional InsurTech or climate analytics platforms is the complete ecosystem it has built to address climate risk. This ecosystem includes a massive climate data infrastructure, scalable product development, automated, instant pricing using an artificial intelligence underwriter, blockchain-powered operational efficiencies, and non-traditional risk capacity bringing capital from non-insurance sources. By combining all these factors, Arbol brings scale, transparency, and efficiency to parametric coverage.
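The parametric structure described above can be made concrete with a toy payout function: coverage pays a fixed amount per unit the observed index falls below a strike, capped at a limit, with no loss adjustment. All numbers, and the choice of a rainfall index, are illustrative only:

```python
# Toy parametric payout: pays `tick` dollars per mm of rainfall shortfall
# below the strike, capped at `limit`. Only the observed index matters —
# no subjective loss assessment is involved.

def parametric_payout(observed_mm: float, strike_mm: float,
                      tick: float, limit: float) -> float:
    shortfall = max(0.0, strike_mm - observed_mm)
    return min(limit, shortfall * tick)

# A drought season: 120 mm observed vs. a 200 mm strike, $1,000/mm, $100k limit.
payout = parametric_payout(observed_mm=120, strike_mm=200, tick=1_000, limit=100_000)
```

Pricing such a contract then reduces to estimating the distribution of the index (here, seasonal rainfall) rather than modeling claims behavior — which is where the modeling work described in this role comes in.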


In this role, you will research, develop, and apply machine learning tools to model and price climate and weather risk. You will work with diverse weather and geospatial datasets covering a suite of phenomena, from traditional weather-station readings of temperature and precipitation, to radar measurements of hail stone sizes, to satellite indices of vegetation content. You will learn how to use our existing catalog of pricing and modeling tools, engage in their improvement and maintenance, and develop new methodologies. We are open to a range of experience levels for this position.



About the Team

The analytics team is responsible for making sense of the terabytes of data Arbol has at its disposal. It forms the connective tissue between more client-facing teams, such as sales, and back-end roles like data engineering. You’ll be joining a small team of data scientists and researchers and will have a unique opportunity to impact many levels of the firm. This is an ideal position for someone interested in building machine learning systems while taking a deep dive into the insurance industry.

\n


What You'll Be Doing
  • Collaborate within the analytics team and across teams to gain expertise in Arbol’s data/pricing infrastructure and products
  • Develop and improve models for climate and weather perils such as heat waves, severe convective storms, and tropical cyclones
  • Implement, assess, and execute pricing algorithms for a wide array of weather risks
  • Work with sales and executive teams to perform business-critical analytics


What You'll Need
  • BA in statistics, computer science, mathematics, or related quantitative field
  • Experience programming in Python and familiarity with common data science packages (pandas, NumPy, scikit-learn)
  • Experience analyzing large datasets
  • Strong problem solving and analytical skills
  • Comfort with statistics (e.g., linear regression, hypothesis testing)
  • Willingness to work and learn in a fast-paced environment


\n
$95,000 - $125,000 a year
\n

Essential Job Functions & Physical Requirements

Ability to sit for extended periods of time while working at a computer, with or without reasonable accommodation

Ability to use a computer, keyboard, mouse, and standard office equipment (e.g., phone, printer, scanner)

Ability to view a computer screen for prolonged periods, with or without reasonable accommodation

Ability to communicate effectively in person, by phone, and via email

Ability to occasionally stand, walk, bend, and reach within an office environment

Ability to lift and/or move up to 10–15 pounds occasionally (e.g., office supplies, files), with or without reasonable accommodation

Ability to perform repetitive motions, such as typing or data entry

Ability to maintain focus and attention while performing detailed tasks



Interested, but you don’t meet every qualification? Please apply!

Arbol values the perspectives and experience of candidates with non-traditional backgrounds and we encourage you to apply even if you do not meet every requirement.


Accessibility

Arbol is committed to accessibility and inclusivity in the hiring process. As part of this commitment, we strive to provide reasonable accommodations for persons with disabilities to enable them to access the hiring process. If you require an accommodation to apply or interview, please contact hr@arbol.io


Benefits

Arbol is proud to offer its full-time employees competitive compensation and equity in a high-growth startup.  Our health benefits include comprehensive health, dental, and vision coverage, and an optional flexible spending account (FSA) to support your health.  We offer a 401(k) match to support your future, and flexible PTO for you to relax and recharge. 


Equal Opportunity Employer

Arbol is an Equal Opportunity Employer and does not discriminate on the basis of race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, veteran status, or any other legally protected status.



Arbol participates in the E-Verify program to confirm employment eligibility.




Please mention the word **EVENTFUL** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Data Engineering Intern
  • RefinedScience
  • Remote
python students support software

Data Engineering Intern

At RefinedScience, our mission is to advance care by bringing together the best science, data and minds – disease by disease, patient by patient, cell by cell to discover pathways to life beyond disease.   

WHAT WE ARE LOOKING FOR

We are seeking a motivated Data Engineering Intern to join our team. This internship is open to undergraduate and graduate students who are interested in building data infrastructure that supports advanced analytics, data science, and AI-driven insights in healthcare and life sciences.

You will work closely with data scientists, bioinformaticians, and engineers to help design, build, and improve data pipelines and platforms that power RefinedScience's research and analytics initiatives.

KEY ACTIVITIES

  • Assist in building and maintaining data pipelines for ingesting, transforming, and validating clinical, biological, and real-world data
  • Support integration of data from multiple sources (e.g., clinical data, analytics outputs, external datasets)
  • Help develop and optimize ETL/ELT workflows to ensure data quality and reliability
  • Collaborate with data science and bioinformatics teams to support analytics and machine learning workflows
  • Contribute to data modeling, documentation, and best practices for data infrastructure
  • Participate in code reviews, testing, and performance improvements
  • Participate in quality reviews and troubleshooting
  • Communicate progress and findings to cross-functional teams
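The pipeline work in the activities above can be sketched as a single ingest-transform-validate step. This is a hedged illustration, not RefinedScience's actual stack; the field names ("patient_id", "value") are hypothetical:

```python
# Illustrative ingest -> transform -> validate step, the kind of pipeline
# work described above. Field names are hypothetical, not from any actual
# RefinedScience schema.
def transform(records):
    """Keep only well-formed records and normalize the measurement value."""
    clean = []
    for rec in records:
        if not rec.get("patient_id"):          # validation: required key
            continue
        try:
            value = float(rec["value"])        # transformation: cast to float
        except (KeyError, ValueError):
            continue                           # drop malformed rows
        clean.append({"patient_id": rec["patient_id"], "value": value})
    return clean

raw = [{"patient_id": "p1", "value": "3.5"},
       {"patient_id": "", "value": "2.0"},     # dropped: missing id
       {"patient_id": "p2", "value": "oops"}]  # dropped: bad value
print(transform(raw))  # -> [{'patient_id': 'p1', 'value': 3.5}]
```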

MUST HAVES

  • Currently enrolled in a Bachelor's, Master's, or Ph.D. program in Data Engineering, Computer Science, Data Science, Software Engineering, or a related field
  • Experience with Python and/or SQL through coursework, projects, or internships
  • Basic understanding of data pipelines, databases, and data transformation concepts
  • Familiarity with version control (e.g., Git)
  • Strong analytical thinking and problem-solving skills
  • Ability to learn quickly and work collaboratively in a team environment

    Please mention the word **LOGICAL** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Junior Data Engineer
  • Satelligence
  • Utrecht
design python django technical

At Satelligence we're looking for a Jr. Data Engineer to join our team.

We are looking for a Junior Data Engineer:

Employment type: 32–40h/week

Location: Utrecht, NL (hybrid)

Experience: Junior–Medior level

Salary: €48 000 – €60 000 gross/year (including 8% holiday allowance, based on 40h/week)

About the job

As a Data Engineer, your main responsibility is building out the capabilities of our (geo)data query engine. You’ll be part of the data engineering team, which develops and maintains our satellite data processing engine, our geospatial storage and query engine, and a set of internal tools used mainly by our OPS team. Our tech stack is Python, Django, and PostGIS, deployed on Google Cloud services such as GKE and Cloud Functions. This role reports to the Engineering Lead.


What will you do?

You'll be instrumental in empowering our product teams to develop and deploy features that help our clients reach their sustainability targets. You'll ensure the reliability, scalability, and performance of our cloud-based data platform, enabling us to deliver critical environmental intelligence through our API. Your work will directly contribute to:

  • Building and maintaining scalable infrastructure on GCP using infrastructure-as-code tools like Terraform

  • Optimizing data pipelines for processing and storing massive datasets (ETL, OLAP)

  • Developing and managing APIs for efficient data dissemination

  • Implementing data engineering best practices for data quality, security, and performance

  • Collaborating closely with product teams to understand their needs and provide technical guidance

  • Contributing to the design and implementation of data storage solutions using databases like PostgreSQL

  • Monitoring and troubleshooting platform performance and ensuring high availability


About you

  • You are an experienced Python developer

  • You are experienced with RDBMS, especially PostgreSQL

  • You are familiar with Django

  • You prefer a well-organized codebase over getting your pull requests merged fast

Nice to have

  • You are experienced with Infrastructure as Code tools such as Terraform

  • You have experience with Google Cloud (Cloud SQL, Cloud Composer, Kubernetes)

  • You worked with PostGIS before or bring other experience with geospatial data


What we offer you:

📍Office centrally located in Utrecht city (with direct access via bus 8 or a 20-minute walk from Utrecht Central Station)
😎27 holidays (based on full-time employment)
👐Solid pension scheme with employer contribution
🚆NS Business Card for employees commuting from outside Utrecht
🖥️Laptop and necessary IT equipment provided
🩺Additional income protection in case of long-term illness or disability, complementing the statutory coverage
🥪Daily lunch, fruits, and Aroma Club coffee at the office
🍹Not the main reason to join, but definitely a fun one: Annual Team Week, after-summer drinks with friends and family, and a festive Christmas celebration.

Meet Satelligence!
Satelligence is the market leader in remote sensing technology for sustainable sourcing, with the mission to halt deforestation. We provide traders, manufacturers, and agribusinesses such as Mondelez, Bunge, Cargill, Unilever, and Rabobank with critical sustainability insights, empowering them to minimize their global environmental footprint and track their progress against climate objectives, ensuring a sustainable supply chain. We were founded in 2016 and currently employ more than 40 people, working in Utrecht and several locations in Asia, Africa, and South America.

Apply for the job

Do you want to join our team as our new Junior Data Engineer? Then we'd love to hear about you!


        Please mention the word **FAIR** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

$$$ Full time
Software Engineer
  • Ren
  • Remote
software design python training

 

Job Title:

Sr Software Engineer

Department:

Product Engineering

 

Position Description:

The Sr Software Engineer will be working with other engineers, architects, and product managers to develop software on our philanthropic solutions software platform. This person must be self-motivated and results-oriented with strong programming skills across modern enterprise software architectures. The Sr Software Engineer is expected to work well in an agile development environment to mentor and develop those around them and build superior products.

 

Duties & Responsibilities:

  • Write and maintain Python scripts for data engineering and machine learning pipelines.
  • Modify database objects using SQL (stored procedures, views, tables, etc.).
  • Write automated unit, integration, and UI-level tests to increase code quality and lower the defect rate.
  • Provide technical guidance and mentorship, giving technical and design feedback through code and peer reviews across the full application stack.
  • Collaborate and pair with other software and data engineers and product professionals to design, implement, and test new features and product refinements.
  • Refactor existing code to improve maintainability and quality.
  • Author and present training materials and documentation for other team members and users of the software.
  • Work closely with Product Management and other areas of the business to ensure market needs are met.
  • Work with the Architecture team to design and implement a new service-based, automated application environment.


Please mention the word **CHERISHED** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Product Data Analyst
  • Big Health
  • Remote - US
analyst python supervisor support

Our Mission

At Big Health, our mission is to help millions back to good mental health by providing fully digital, non-drug options for the most common mental health conditions. Our FDA-cleared digital therapeutics—SleepioRx for insomnia and DaylightRx for anxiety—guide patients through first-line recommended, evidence-based cognitive and behavioral therapy anytime, anywhere. Our digital program, Spark Direct, helps to reduce the impact of persistent depressive symptoms.


In pursuit of our mission, we’ve pioneered the first at-scale digital therapeutic business model in partnership with some of the most prominent global healthcare organizations, including leading Fortune 500 healthcare companies and Scotland’s NHS. Through product innovation, robust clinical evaluation, and a commitment to equity at scale, we are designing the next generation of medicine and the future of mental health care. 


Our Vision

Over the next 5-10 years, we believe digital therapeutics will transform the delivery of healthcare worldwide by providing access to safe and effective evidence-based treatments. Big Health is positioned to take the lead in this transformation.


Big Health is a remote-first company, and this role can be based anywhere in the US.


Join Us

We're seeking a Product Data Analyst contractor to drive data-informed product decisions by improving our data democratization, analyzing data, generating insights, and generating reports. You'll partner closely with product, growth, enrollment marketing, and client implementation teams to understand user behavior, measure product performance, and identify opportunities for growth and improvement. 



Key Responsibilities
  • Use SQL to query data in Snowflake.
  • Update Snowflake data models, consistent with current data architecture. 
  • Use LookML to add new dimensions, measures, table calculations, and explores to Looker.
  • Create dashboards in Looker and PostHog to support growth, enrollment marketing, client implementation, product initiatives, and/or company OKRs.
  • Conduct deep-dive analyses using data from Snowflake and Looker to understand user behavior patterns, identify friction points in the user journey, and uncover opportunities for product enhancement. Analyses may include, but are not limited to, descriptive analytics, correlation, regression, and between-group analyses. 
  • Present the results of these analyses to a cross-functional audience, translating complex data findings into actionable recommendations.
  • Build externally-facing reports that provide stakeholders with clear visibility into user engagement, feature adoption, clinical outcomes, and recommendations for optimal product use.
  • Provide data to help justify and inform decision-making around A/B tests and experiments to validate product hypotheses and measure the impact of new features or changes. 
  • Use DBT to build data models and add new data sources to Snowflake. 
  • Assist with updating data dictionary and ERD. 
  • Communicate proactively. During onboarding, you will meet 3-5x/week with your supervisor to provide updates on ticket status and to ask questions. Asking questions outside of these meetings is expected and welcomed. 
  • Work with your supervisor and relevant stakeholders to proactively discuss requirements when questions arise. 


Required Qualifications
  • 3+ years of experience in product analytics, data analysis, or a related analytical role, preferably in a product-driven technology company
  • Strong SQL skills and experience working with large datasets in modern data warehouses like Snowflake, BigQuery, or Redshift
  • Experience with dbt or similar data transformation tools for building modular, tested, and documented data models
  • Proficiency in version control systems like Git for managing code and collaborating with data and engineering teams 
  • Proficiency in analytics tools such as Python or R for statistical analysis and data manipulation
  • Familiarity with BI visualization tools like Looker, Tableau, or Mode
  • Basic understanding of data pipeline orchestration and workflow management tools such as Airflow or similar. Familiarity with ELT/ETL processes and data integration tools like Fivetran, Stitch, or custom-built pipelines 
  • Solid understanding of statistical concepts including hypothesis testing, regression analysis, and experimental design. Experience designing and analyzing A/B tests with proper statistical rigor 
  • Familiarity with healthcare concepts and terminology is highly desirable
  • Strong communication skills
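The "proper statistical rigor" for A/B tests mentioned in the qualifications above typically means something like a two-proportion z-test on conversion counts. A minimal sketch using only the standard library (the counts below are invented for illustration; real analysis would likely use scipy or statsmodels):

```python
import math

# Two-proportion z-test for an A/B test: did variant B convert better than A?
def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented example: 12% vs 15% conversion on 1,000 users per arm
z, p = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 2), round(p, 3))
```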


Background and Life at Big Health
  • Backed by leading venture capital firms.
  • Big Health’s products are used by large multinational employers and major health plans to help improve sleep and mental health. Our digital therapeutics are available to more than 62 million Medicare beneficiaries.
  • Surround yourself with the smartest, most enthusiastic, and most dedicated people you'll ever meet—people who listen well, learn from their mistakes, and when things go wrong, generously pull together to help each other out. Having a bigger heart and a small ego are central to our values.


$50 - $80 an hour
The hourly rate range for this contractor position is $50.00 - $80.00 per hour. This range reflects the target hourly rate for the engagement and may vary based on experience, scope of work, location, and engagement structure. The hourly rate is the sole and full compensation provided for this contractor position.

Rates are determined by role requirements, level, and market factors. The range displayed reflects the minimum and maximum target hourly rates for this engagement. Final rates are determined based on relevant skills, experience, availability, and the specific terms of the engagement. Compensation for contractors does not include benefits, paid time off, or other employee benefits and is subject to change based on business needs.

We at Big Health are on a mission to bring millions back to good mental health, in order to do so, we need to reflect the diversity of those we intend to serve. We’re an equal opportunity employer dedicated to building a culturally and experientially diverse team that leads with empathy and respect. Additionally, we will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.


Big Health participates in E-Verify for all new hires in the United States.



Please mention the word **NIMBLE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Data Analyst
  • Restaurant365
  • Remote
analyst saas python technical

Restaurant365 is a SaaS company disrupting the restaurant industry! Our cloud-based platform provides a unique, centralized solution for accounting and back-office operations for restaurants. Restaurant365’s culture is focused on empowering team members to produce top-notch results while elevating their skills. We’re constantly evolving and improving to make sure we are and always will be “Best in Class” ... and we want that for you too!


Restaurant365 is seeking a Data Analyst to join our Enterprise Data Analytics team. This role supports business teams across the organization by helping turn data into insights that inform day-to-day decisions and longer-term planning.


As a Data Analyst, you will partner with stakeholders to understand business questions, support reporting needs, and help maintain dashboards and KPIs. You’ll work within established data models and governance practices while continuing to build your technical and business analysis skills. This role is ideal for someone who enjoys working with data, learning the business, and growing into a strong analytics partner over time.



How you'll add value:
  • Analytics & Reporting
    · Analyze operational, customer, financial, and usage data to support business reporting and ad hoc analysis.
    · Help maintain and monitor KPIs that track business performance and operational health.
    · Build, update, and maintain dashboards and reports in Domo for business stakeholders.
    · Assist with trend analysis, performance monitoring, and identifying areas for improvement.
    · Support forecasting, planning, and recurring reporting processes under guidance from senior analysts or managers.
  • Business Partnership
    · Work with business stakeholders to understand reporting needs and translate questions into clear analytics requests.
    · Help define basic success metrics and KPIs for initiatives and projects.
    · Provide clear, well-documented analyses that support business decision-making.
    · Participate in requirement gathering sessions and stakeholder check-ins.
  • Collaboration & Enablement
    · Partner with other analysts, analytical engineers, and data engineers to ensure accurate and consistent reporting.
    · Follow established data governance and quality standards for dashboards and reports.
    · Support documentation of metrics definitions, dashboards, and reporting logic.
    · Learn to present insights in a clear, concise way to both technical and non-technical audiences.


What you'll need to be successful in this role:
  • 2–4 years of experience in data analytics, business analytics, or a related role.
  • Experience working in a SaaS, technology, or data-driven environment is a plus.
  • Working knowledge of SQL for querying and analyzing data.
  • Experience using BI tools (Domo preferred, but others acceptable).
  • Familiarity with Excel or Google Sheets for analysis and validation.
  • Exposure to Python or R is a plus but not required.
  • Ability to analyze datasets, identify trends, and summarize findings clearly.
  • Basic understanding of common business metrics (revenue, retention, adoption, operational efficiency).
  • Comfort working with defined KPIs and reporting frameworks.
  • Clear written and verbal communication skills.
  • Ability to explain analysis results in a straightforward, business-friendly way.
  • Willingness to learn, ask questions, and incorporate feedback.
  • Ability to work effectively with cross-functional partners.
NICE TO HAVE
  • Exposure to Snowflake, dbt, or modern cloud data platforms.
  • Experience supporting recurring business reporting or executive dashboards.
  • Familiarity with basic project tracking or Agile concepts.
  • Interest in growing toward advanced analytics, analytics engineering, or business analytics leadership.


R365 Team Member Benefits & Compensation
  • This position has a salary range of $87,083.33-$121,916.67 per year. The above range represents the expected salary range for this position. The actual salary may vary based upon several factors, including, but not limited to, relevant skills/experience, time in the role, business line, and geographic location. Restaurant365 focuses on equitable pay for our team and aims for transparency with our pay practices.
  • Comprehensive medical benefits, 100% paid for employee
  • 401k + matching
  • Equity Option Grant
  • Unlimited PTO + Company holidays
  • Wellness initiatives

#BI-Remote


$87,083.33 - $121,916.67 a year

DYN365, Inc d/b/a Restaurant365 is an equal opportunity employer.



Please mention the word **FTW** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
python senior engineering

We are one of the largest private banks in Brazil, according to the Central Bank's ranking. And we are very proud to say that, for the second year in a row, we were recognized as the best financial institution to work for in Brazil according to the GPTW 2025 ranking! We also received the Diversity seal in the Women category, reinforcing our commitment to equity.


Our culture happens for real: we are simple, honest, collaborative, and courageous. We value relationships, innovation, and a light, increasingly collaborative environment, with intentionality in advancing diversity and inclusion.


We are constantly evolving and building successful #partnerships to deliver on our purpose of making the financial lives of people and companies easier.


Sound like you? Then come work with us!



Take a look at the challenges that await you:
  • We are looking for a Senior Machine Learning Engineer to help evolve our Machine Learning platform and ensure that the models used across many areas of the bank operate with high quality, governance, and scalability;
  • Analyze our internal tools with a critical eye and room to propose improvements, acting in a consulting role;
  • Own the observability of ML models, suggesting metrics for more efficient monitoring;
  • Review the quality of deployment code;
  • Be the point of reference for the platforms used internally.


Sound like you? Now we'd like to know whether you have the profile and the skills below:
  • Solid experience in ML engineering, MLOps, or Data Engineering applied to models in production;
  • Strong command of Python and ML/data science libraries;
  • Experience with distributed platforms, preferably Databricks/Spark.
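One common observability metric for ML models in production, of the kind this role would suggest for monitoring, is the Population Stability Index (PSI), which compares the model's score distribution at training time against live traffic. A minimal sketch (the bucket shares below are invented):

```python
import math

# Population Stability Index (PSI): sum of (actual - expected) * ln(actual/expected)
# over score buckets. Rough rule of thumb: PSI > 0.1 suggests drift worth a look,
# PSI > 0.25 suggests significant drift. Data here is invented for illustration.
def psi(expected_fracs, actual_fracs, eps=1e-6):
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)   # guard against log(0)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]       # score-bucket shares at training time
live     = [0.10, 0.20, 0.30, 0.40]       # shares observed in production
print(round(psi(baseline, live), 3))
```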



Diversity and inclusion


BV works intentionally to accelerate equity and representation in the financial market, respecting and supporting diversity in all its plurality and intersectionality, and ensuring positive social transformation.


That is why we invite Black people, women, professionals with disabilities, members of the LGBTQIA+ community, and people of any age to get to know us a little better and apply for this position.



Please mention the word **PROSPEROUS** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Data Analyst 3
  • SkySlope
  • Remote
analyst salesforce python technical

OUR ORIGIN STORY 🎂


In 2011, SkySlope started as an idea born at the kitchen table of our CEO, with just him and two others. Headquartered in Sacramento, California, we have since grown out of our previous 3 offices, and many of our close to 150 employees are spread all across the United States. Those 150 employees support close to 300,000 users across 5,000 offices nationwide, and now in Canada as well. That includes 8 of the 15 largest real estate brokerages in the nation.


But despite being happy with what we’ve achieved, we know that as industry leaders in our space there’s a lot of work left to be done. All of the growth and success that has happened is a result of us obsessing over building cutting-edge software that makes the real estate world a better place. We know this only happens by hiring people who don’t just come up with out-of-the-box ideas, but who actually see those ideas through and bring them to life. As we’ve grown, we’ve been fortunate to hire plenty of people who possess that quality, and we realize it’s equally important to hire people who pair that skill with empathy, collaboration, and a keen sense of urgency. If you’re looking to join a company where you can have real impact and surround yourself with an incredible team of people, look no further.

                                                                                                                                                                                                                


SKYSLOPE’S CORE VALUES 💪🏻


These are the principles that helped us get to where we are and they are the principles that will guide us to where we want to go in the future. You can apply them to your professional life, your personal life, to any business and any situation. In no specific hierarchy, our core values are:


Awareness | Execution | Obsession | Ownership | Humility | Radical Candor | Urgency | Greatness | Inches | Fun


Learn more about our core values from our CEO, Tyler Smith here!

                                                                                                                                                                                                                


About the role: We are looking for a Data Analyst III to join our team and to help elevate the way we leverage data across the organization. While this role includes traditional data retrieval and reporting, we're looking for someone who goes beyond fulfilling requests — someone who proactively identifies trends, surfaces insights, and brings forward recommendations that help teams make better decisions before they even know to ask. Experience or curiosity around AI-assisted analytics is a plus, but this is first and foremost a strong data analyst role.



What Sets You Apart
  • You don't wait to be asked. You dig into the data, find what matters, and bring it to the people who need it. You're curious about new tools and techniques — including AI — but you're grounded in strong analytical fundamentals. You care about getting the answer right and communicating it in a way that actually moves the needle.


Essential Functions
  • Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
  • Query, extract, and transform data from multiple sources across MS SQL Server, MySQL, and MongoDB environments to support business needs
  • Build and maintain automated reports, dashboards, and data pipelines that reduce manual effort and improve data accessibility
  • Partner with cross-functional teams to understand their goals and proactively deliver analytical insights that drive action
  • Identify patterns, trends, anomalies, and opportunities in data sets and communicate findings clearly to both technical and non-technical audiences
  • Develop and maintain Python scripts for data automation, transformation, reporting and analysis
  • Contribute to improving our data infrastructure, documentation, and analytical best practices
  • Explore opportunities to incorporate AI-powered tools and techniques into existing workflows where they add clear value
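The anomaly-spotting described above can be as simple as a z-score screen over a metric series before anything fancier is reached for. A hedged sketch using only the standard library (the metric name and numbers are invented):

```python
import statistics

# Flag values more than `threshold` standard deviations from the mean.
# Purely illustrative; a production version would handle seasonality,
# rolling windows, etc.
def zscore_anomalies(values, threshold=2.0):
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if stdev and abs(v - mean) / stdev > threshold]

daily_signups = [102, 98, 105, 99, 101, 97, 300]  # one obvious spike
print(zscore_anomalies(daily_signups))  # -> [300]
```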


Other Duties
  • Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.


Requirements
  • 5+ years of experience in a data analyst or similar role with progressive responsibility
  • Advanced SQL proficiency across both MS SQL Server and MySQL, including complex joins, stored procedures, query optimization, and cross-database work
  • Python proficiency for scripting, data manipulation, and automation (pandas, NumPy, or similar libraries)
  • Experience with BI/visualization tools such as Tableau, Power BI, Looker, or similar platforms
  • Solid understanding of data warehousing concepts, data modeling, and ETL/ELT processes
  • Strong communication skills with the ability to translate analytical findings into clear, actionable recommendations for stakeholders
  • Self-directed mindset with a demonstrated history of going beyond ad-hoc requests to proactively surface insights and improve processes


Preferred Qualifications
  • Familiarity with cloud platforms (Azure, AWS, or GCP)
  • Exposure to machine learning concepts or AI-assisted analytics tools (e.g., using APIs for text analysis, summarization, or data enrichment)
  • Experience with A/B testing, statistical modeling, or causal inference
  • Knowledge of version control (Git) and collaborative development workflows
  • Statistics, data science, or related degree or certification (equivalent experience welcomed)
  • MongoDB experience, including aggregation pipelines and working with unstructured or semi-structured data
  • Experience with data orchestration or transformation tools such as dbt, Apache Airflow, or similar
  • Familiarity with product and web analytics platforms such as Heap and/or Google Analytics
  • Exposure to tools such as Chameleon, HubSpot, or Salesforce is a bonus but not required
  • Real estate industry knowledge and/or experience
  • Experience mentoring junior analysts or leading small-scale analytical projects


$100,000 - $120,000 a year

Medical Insurance – Company pays flat dollar amount towards premium 

There are 3 plan options 

Our Medical Insurance plans are provided through United Healthcare 

The United Healthcare HMO is only offered to California residents

Eligibility begins 1st of the month following date of hire

Per Paycheck (24 pay periods a year)

Employee costs per tier are as follows:


UHC HDHP/HSA

Employee Only  $58.92

Employee + Child $147.30

Employee + Spouse $175.78

Employee + Family $259.24


UHC PPO

Employee Only $104.10

Employee + Child $244.63

Employee + Spouse $289.91

Employee + Family $422.63


UHC HMO (CA residents only)

Employee Only $84.56

Employee + Child $198.71

Employee + Spouse $235.49

Employee + Family $343.29


Dental Insurance – Company pays 75% of monthly premium only on Base Plan

This PPO plan is administered through Principal

Eligibility begins 1st of the month following date of hire


Principal Dental Base Plan

Employee Only $4.19

Employee + Child $11.73

Employee + Spouse $8.50

Employee + Family $17.20


Principal Dental Buy-Up Plan

Employee Only $6.65

Employee + Child $19.53

Employee + Spouse $13.51

Employee + Family $28.35


Vision Insurance – Company pays 100% of monthly premium

This plan is administered through Principal (VSP choice network)

Eligibility begins 1st of the month following date of hire


Basic Life and AD&D Insurance (with additional Voluntary Plans available) – Company paid plan with a guarantee issue amount of $25,000. 

Plan is administered through Principal

Eligibility begins 1st of the month following date of hire

Pricing varies for additional coverage, based upon age, coverage and dependent classification


Voluntary Short & Long Term Disability Insurance Plans – Optional plans to help protect your financial well-being.

Plan is administered through Principal

Eligibility begins 1st of the month following date of hire

Pricing varies, based upon age


Voluntary Accident Insurance – Optional plans available to purchase that pay you a cash benefit to help with your expenses if you or a covered family member is injured due to an accident.

Employee Only $4.39

Employee + Spouse $6.73

Employee + Child(ren) $7.49

Employee + Family $11.50


Voluntary Hospital Indemnity – Optional plans available to purchase that pay you a cash benefit to help with your expenses if you or a covered family member is admitted to the hospital.

Employee Only $6.85

Employee + Spouse $17.43

Employee + Child(ren) $11.41

Employee + Family $22.84


Voluntary Critical Illness – Optional plans available to purchase to help with your expenses if you or a covered family member is diagnosed with a covered critical illness.

Pricing varies, based upon age


Flexible Spending Account – A tax-advantaged account you fund to pay for certain out-of-pocket health care and dependent care costs.

Plan is administered through Discovery Benefits

Eligibility begins 1st of the month following date of hire, if you sign up by the 25th of the month


Health Savings Account (HSA)– A tax savings account for employees enrolled in a High Deductible Health Plan. You can put money into this account to pay for certain out-of-pocket health care costs

Plan is administered through Discovery Benefits

Eligibility begins 1st of the month following date of hire, if you sign up by the 25th of the month

Must be enrolled in the UHC HDHP/HSA medical plan with SkySlope to be eligible

SkySlope contributes $300 to an individual HSA and $600 to a family HSA


401(k) Plan – Company will match $0.50 on each $1.00 contributed up to the first 6% of eligible earnings

Plan is administered through Principal

Eligibility begins first pay date after 90 days of employment

Auto-enrollment after eligibility at 3% of gross annual earnings

Defer between 1% and 40% of eligible contribution


Employee Stock Purchase Plan - Company match equal to 33.3333% of dollars contributed to the plan, based upon the average purchase price for the quarter.

Plan administered through Fidelity 

Eligibility begins first pay date after 90 days of employment

May contribute after-tax dollars from 3% to 15% of base earnings


Paid Time Off (PTO) – Company provides 120 hours (equivalent of 15 days) of PTO for new hires

PTO accrual begins after 90 days of employment


16 Paid Holidays

11 observed, 5 floating (used for personal holidays)

List of observed holidays published annually

Eligibility begins on your first day of employment


Bereavement Leave – Company will provide you with the following time off to grieve the loss of a loved one.

5 paid days of leave for an immediate family member (a spouse, child, parent, or grandparent).

1 paid day of leave for a close non-family member.


Discounts through Fidelity - Purchasing discounts for wireless, car rentals, hotels and more…


Pet Insurance through Nationwide – 50% and 70% reimbursement plans available through Nationwide, with options for wellness coverage. SkySlope contributes $20 a month per pet, up to 2 pets, towards the cost of the plan.


Paid Parental Leave - All full-time regular employees are eligible for SkySlope’s Paid Parental Leave program, which provides employees with up to six (6) weeks of pay following the birth or placement of a new child. Paid Parental Leave must be taken within the first 6 months of the birth or placement of a new child. Employees will be paid at their regular rate of pay based upon their normal work schedule, up to a maximum of forty (40) hours per week.


Dayforce Wallet – All full-time regular employees will have access to sign up for Dayforce Wallet. Dayforce Wallet is a program provided by our payroll provider that allows employees to access their pay on-demand as soon as it is earned, without waiting for their standard payday.


Waldorf University discounts and perks- 10% off tuition for employees and their families, free text books, and scholarship opportunities available


Child Literacy Assistance Program discount – Discounted annual membership to Luminous Minds, an online resource center created to help with child literacy struggles. $85 for a 1-year membership as a SkySlope employee.


$1,000 Employee Referral bonuses – SkySlope will give every referrer $1,000 (post-tax) after the referee passes their 90-day mark.


In addition to the above you also receive other perks like our Annual Employee Appreciation Day and additional internal company events.




SkySlope is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, disability, protected veteran status, national origin, sexual orientation, gender identity or expression (including transgender status), genetic information or any other characteristic protected by applicable law.


We sincerely thank you for taking the time to review our open positions and hope you'll take the time to submit a concise and thoughtful application.


Still thinking about applying? Waiting to hear back from us? Check out our social media in the meantime!

SkySlope | Facebook | Instagram | YouTube | LinkedIn | Twitter


Your privacy is important to us. Learn more about what data is collected and how we use it here.





Please mention the word **PROMINENT** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Big Data Engineer
  • Oowlish Technology
  • Remote
python support software growth

Join Our Team


Oowlish, one of Latin America's rapidly expanding software development companies, is seeking experienced technology professionals to enhance our diverse and vibrant team.


As a valued member of Oowlish, you will collaborate with premier clients from the United States and Europe, contributing to pioneering digital solutions. Our commitment to creating a nurturing work environment is recognized by our certification as a Great Place to Work, where you will have opportunities for professional development, growth, and a chance to make a significant international impact.


We offer the convenience of remote work, allowing you to craft a work-life balance that suits your personal and professional needs. We're looking for candidates who are passionate about technology, proficient in English, and excited to engage in remote collaboration for a worldwide presence.


About the Role:


We are seeking a hands-on Big Data Engineer to support and enhance an AWS-based data platform, focusing on pipeline reliability, scalable processing, and performance optimization. This role requires strong Python expertise, deep familiarity with AWS data services, and the ability to maintain production-grade data workflows.


You will work on event-driven pipelines, contribute to CI/CD improvements, and collaborate on platform reliability initiatives. This role is ideal for someone who enjoys building and maintaining data infrastructure, optimizing large-scale data processing systems, and working in cloud-native environments.


This is a 6-month engagement, aligned to ET time zone.



Key Responsibilities:
  • Develop and maintain data processing logic using Python
  • Build, optimize, and support data pipelines using AWS Glue and Lambda
  • Write and optimize complex SQL queries for analytics and operational workloads
  • Support platform reliability and pipeline monitoring
  • Contribute to CI/CD processes using GitHub and GitHub Actions
  • Collaborate on infrastructure improvements using Infrastructure-as-Code principles
  • Troubleshoot and resolve pipeline failures and performance issues
  • Support data consumption layers used by BI tools
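The event-driven pipeline work described above can be sketched in plain Python. The function shape below mirrors an AWS Lambda entry point, but this is only an illustrative sketch: the record fields (`body`, `id`, `amount`) and the validation rules are hypothetical, not taken from the actual platform.

```python
import json

def handler(event, context=None):
    """Hypothetical Lambda-style handler: parse incoming event records,
    skip malformed or incomplete rows, and emit cleaned records for a
    downstream job (e.g., an AWS Glue transform)."""
    cleaned = []
    for record in event.get("Records", []):
        try:
            payload = json.loads(record["body"])
        except (KeyError, json.JSONDecodeError):
            continue  # drop malformed records rather than failing the whole batch
        if payload.get("amount") is None:
            continue  # drop records missing a required field
        cleaned.append({"id": payload["id"], "amount": float(payload["amount"])})
    return {"processed": len(cleaned), "records": cleaned}

# Simulated event batch: one valid record, one malformed, one incomplete
event = {"Records": [
    {"body": json.dumps({"id": 1, "amount": "10.5"})},
    {"body": "not json"},
    {"body": json.dumps({"id": 2, "amount": None})},
]}
```

In a real deployment the skip-don't-fail choice matters: failing the batch on one bad record forces a full retry, while skipping (ideally with a dead-letter queue) keeps the pipeline flowing.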


Must Have:
  • 4+ years of experience as a Data Engineer / Big Data Engineer
  • Strong hands-on Python experience (data processing and application logic)
  • Advanced SQL skills (query optimization, performance tuning)
  • Production experience with AWS Lambda and AWS Glue
  • Experience working with CI/CD tools (GitHub, GitHub Actions)
  • Familiarity with Snowflake and/or Aurora
  • Understanding of Infrastructure-as-Code (IaC) concepts
  • Comfortable working in the ET time zone


Nice to Have:
  • Experience with BI tools (Sigma preferred)
  • Experience with event-driven architectures
  • Exposure to enterprise-scale data platforms




Benefits & Perks:


  • Home office
  • Competitive compensation based on experience
  • Career plans to allow for extensive growth in the company
  • International Projects
  • Oowlish English Program (Technical and Conversational)
  • Oowlish Fitness with Total Pass
  • Games and Competitions



You can also apply here:


Website: https://www.oowlish.com/work-with-us/

LinkedIn: https://www.linkedin.com/company/oowlish/jobs/

Instagram: https://www.instagram.com/oowlishtechnology/





Please mention the word **AWESOMENESS** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Data Engineer
  • Oowlish Technology
  • Remote
python support software growth

Join Our Team


Oowlish, one of Latin America's rapidly expanding software development companies, is seeking experienced technology professionals to enhance our diverse and vibrant team.


As a valued member of Oowlish, you will collaborate with premier clients from the United States and Europe, contributing to pioneering digital solutions. Our commitment to creating a nurturing work environment is recognized by our certification as a Great Place to Work, where you will have opportunities for professional development, growth, and a chance to make a significant international impact.


We offer the convenience of remote work, allowing you to craft a work-life balance that suits your personal and professional needs. We're looking for candidates who are passionate about technology, proficient in English, and excited to engage in remote collaboration for a worldwide presence.


About the Role:


We are seeking a Senior Data Engineer with strong expertise in enterprise data modeling and AWS-based data platforms to support a mature and evolving data ecosystem. This role requires hands-on experience working with large-scale data environments, optimizing data models, and maintaining event-driven pipelines in a cloud-native architecture.


You will work across data modeling, pipeline development, API data support, and infrastructure collaboration. This position is ideal for someone comfortable operating in enterprise environments, maintaining production-grade systems, and improving performance and scalability across a modern AWS data stack.


This is a 6-month engagement with ET time zone alignment required.



Must-Have:
  • 6+ years of experience in Data Engineering
  • Strong experience with Snowflake and Aurora Postgres
  • Advanced SQL and data modeling expertise (logical & physical design)
  • Hands-on experience with AWS data services (Glue, Lambda, DMS, EventBridge)
  • Strong Python experience for data pipelines
  • Experience supporting enterprise-scale data platforms
  • Experience with CI/CD (GitHub Actions)
  • Comfortable working in the ET time zone


Nice to Have:
  • Experience working with Terraform
  • Exposure to artifact management and infrastructure-as-code best practices
  • Experience in performance tuning at scale
  • Experience implementing automated data quality frameworks
  • Prior experience in enterprise or large distributed systems




Benefits & Perks:


  • Home office
  • Competitive compensation based on experience
  • Career plans to allow for extensive growth in the company
  • International Projects
  • Oowlish English Program (Technical and Conversational)
  • Oowlish Fitness with Total Pass
  • Games and Competitions



You can also apply here:


Website: https://www.oowlish.com/work-with-us/

LinkedIn: https://www.linkedin.com/company/oowlish/jobs/

Instagram: https://www.instagram.com/oowlishtechnology/





Please mention the word **RECTIFYING** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Software Engineer
  • itD Tech
  • Arizona
software design python training
itD is seeking a Software Engineer to design and scale the data pipelines that power next-generation foundation models for machine-generated data, including time series, logs, and large-scale event streams. This role contributes directly to the success of model training and production systems by enabling reliable, high-performance data infrastructure at scale. The ideal candidate will bring deep experience in distributed systems and data engineering, along with a proven track record of delivering scalable, production-ready data pipelines that support machine learning workflows.

Location: Remote (U.S.-based; time zone alignment with Pacific or Central preferred)

We provide comprehensive medical benefits, a 401(k) plan, paid holidays, and more. Please note that we are only considering direct W2 candidates at this time, as we are unable to offer sponsorship.

Responsibilities:
  • Build and scale distributed data pipelines for large-scale time series, log data, and high-volume event streams.
  • Design and maintain reliable, high-performance Spark and Python workflows to support model training datasets.
  • Analyze and resolve performance bottlenecks related to latency, memory utilization, data skew, and throughput.
  • Improve data quality, validation processes, and reproducibility for machine learning workloads.
  • Partner with machine learning engineers and researchers to

Please mention the word **UNDAUNTED** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Data Analyst II
  • ComputerCare
  • Remote
analyst system python technical

ComputerCare has spent more than 20 years building something rare in the IT world: a company where technical excellence and genuine human connection are valued equally. We're the trusted partner that IT leaders turn to when technology can't afford to fail. As a woman-owned business serving innovative companies worldwide, we combine certified technical expertise with a human approach. Whether it's managing complex device lifecycles for global teams or performing authorized repairs for Apple, Lenovo, HP and Dell devices, our work directly impacts how thousands of people stay productive every day. We never outsource our work because we believe in accountability, quality, and building lasting relationships—with our clients and as a team.


If you're passionate about technology, take pride in solving real problems, and want to be part of a company that values both technical excellence and the people behind it, ComputerCare is where you belong.


Come join us in our mission of being the Human Side of Hardware! 


We’re looking for a Data Analyst II to serve as a key point of contact and subject matter expert for data-related requests and system updates. You’ll analyze, extract, and interpret data from multiple systems, including SQL databases and reporting tools, and implement data solutions that support business workflows and decision-making.


If you enjoy solving complex problems with data and making an impact, we want you on our team!



What You'll Do:
  • Assist in designing and structuring database architecture to support scalable data storage, efficient querying, and optimized performance.
  • Demonstrate understanding of relational databases, including tables, schemas, indexing, normalization, and relationships.
  • Help build and maintain data pipelines to move and transform data between systems while ensuring accuracy and reliability.
  • Create dashboards, reports, and visualizations using SQL, Excel, Tableau, Power BI, or Looker Studio to communicate findings clearly to stakeholders.
  • Analyze large datasets to identify trends, patterns, correlations, and actionable insights that support business decisions.
  • Collect, organize, and maintain data from multiple sources while ensuring data integrity and accuracy.
  • Write, maintain, and optimize SQL queries for reporting, analysis, and data extraction.
  • Clean, preprocess, and transform raw data using SQL and Python to prepare it for analysis and reporting.
  • Work with cross-functional teams to understand business requirements, define KPIs, and translate them into analytical solutions.
  • Identify inefficiencies in data processes and implement automation using SQL, Python, or ETL tools to improve workflow and data quality.
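The cleaning and preprocessing work above can be sketched with a few lines of pandas. This is only an illustrative sketch of the kind of step the role describes; the column names and values are hypothetical, not from ComputerCare's systems.

```python
import pandas as pd

# Hypothetical raw export: a duplicate row, a missing cost, inconsistent casing
raw = pd.DataFrame({
    "device_id": ["A1", "A1", "B2", "C3"],
    "repair_cost": ["120.50", "120.50", None, "80"],
    "status": ["closed", "closed", "open", "CLOSED"],
})

# Typical prep before analysis or loading into a reporting tool:
# drop exact duplicates, coerce cost to numeric, normalize categories
clean = (
    raw.drop_duplicates()
       .assign(
           repair_cost=lambda d: pd.to_numeric(d["repair_cost"]).fillna(0.0),
           status=lambda d: d["status"].str.lower(),
       )
)
```

The same transformations could equally be pushed down into SQL (`DISTINCT`, `CAST`, `LOWER`); doing them in pandas keeps the logic versionable alongside the rest of a Python pipeline.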


What You'll Bring:
  • Bachelor’s degree in Computer Science, Information Systems, Statistics, Mathematics, or a related field.
  • 2–5 years of experience in data analysis, reporting, or database management.
  • Experience working with SQL databases and writing complex queries.
  • Experience with Python (pandas, NumPy) and other scripting languages for data manipulation.
  • Experience with data visualization tools (HEX, Tableau, Power BI, Excel dashboards).


Perks and Benefits:
  • Comprehensive Medical, Dental, and Vision plans to keep you feeling your best
  • 401(k) with employer match—because your future matters
  • Company-paid Life Insurance, plus HSA & FSA options
  • Employee Assistance Program (EAP) for real support when you need it
  • Adoption Assistance to help grow your family
  • Commuter Benefits for an easier ride
  • Free Coursera Professional Certifications to level up your skills
  • Generous vacation & sick time, plus paid time off to give back to your community


$80,000 - $115,000 a year

If you get to this point, we hope you're feeling excited about the job you just read. Even if you don't feel that you meet every single requirement, we still encourage you to apply. We're eager to meet people who believe in ComputerCare's mission and core values and can contribute to our team in a variety of ways – not just candidates who check all the boxes.


At ComputerCare, we welcome passionate individuals who have the unrestricted right to work in the United States, including natural citizens and Green Card holders.


ComputerCare is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.



Please mention the word **GORGEOUS** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$75000 - $125000 Full time
Data Analyst
  • World Golf Tour (WGT)
  • San Francisco
analyst security python game

Role

World Golf Tour is seeking a Data Analyst to join our Product team. In this critical role, you will be the custodian of our data, organizing insights, and analyzing telemetry to support strategic business decisions. You will focus on developing and maintaining dashboards and analysis reports, collaborating across the studio and closely with the Product team to provide actionable insights that help drive the business. This role emphasizes strong data stewardship, visualization and statistical analysis.

Responsibilities

  • Clean, validate, and prepare datasets for analysis, including resolving issues regarding missing, inconsistent, or novel data
  • Perform exploratory data analysis to identify trends, patterns, and anomalies that inform business decisions
  • Develop and maintain dashboards, reports, and visualizations using tools such as Amplitude, Power BI, or Excel
  • Translate analytical findings into clear, actionable insights for both technical and non-technical stakeholders
  • Partner with business teams (e.g., marketing, product, finance) to understand data needs and deliver relevant analyses
  • Support ad hoc analysis and deep dives to answer specific business questions or identify opportunities
  • Ensure compliance with data governance, privacy, and security standards

Experience and Skills

  • Bachelor’s degree in Data Analytics, Statistics, Mathematics, Computer Science, Economics, or a related quantitative field
  • 2–4 years of experience in a data analyst or similar role, preferably in game or software development
  • Strong proficiency in SQL for data querying and manipulation
  • Experience with data analysis tools/languages such as Python or R
  • Advanced proficiency in Excel (e.g., pivot tables, formulas, data modeling)
  • Experience with data visualization tools (e.g., Tableau, Power BI)
  • Strong proficiency in statistical methodologies and data analysis
  • Strong problem-solving and critical thinking skills
  • Excellent communication skills, with the ability to present complex data in a clear and concise manner

Preferred Qualifications

  • Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery)
  • Familiarity with ETL processes and data pipeline development
  • Knowledge of basic machine learning or predictive analytics techniques
  • Experience working in game development
  • Understanding of data governance and privacy regulations
  • Experience in a fast-paced, cross-functional environment

About Us

World Golf Tour is a leader in online golf, delivering the most realistic and immersive virtual golf experience to players around the globe. We are best known for our core product WGT Golf, a free-to-play golf game that has set the standard for virtual golf since its launch in 2008. Renowned for its photorealistic recreations of iconic courses such as Pebble Beach, The Old Course at St Andrews, and Quail Hollow Club, the game combines authentic course imagery with precise swing mechanics and multiplayer competition to offer an experience trusted by millions.



Please mention the word **ENRAPTURE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Infrastructure Manager
  • Andromeda Cluster
  • San Francisco
manager training technical cloud

Infrastructure Manager

Location: North America Remote / San Francisco · Full-Time

About Andromeda

Andromeda Cluster was founded by Nat Friedman and Daniel Gross to give early-stage startups access to the kind of scaled AI infrastructure once reserved only for hyperscalers.

We began with a single managed cluster — but it filled almost instantly. Since then, we’ve been quietly building the systems, network, and orchestration layer that makes the world’s AI infrastructure more accessible.

Today, Andromeda works with leading AI labs, data centers, and cloud providers to deliver compute when and where it’s needed most. Our platform routes training and inference jobs across global supply, unlocking flexibility and efficiency in one of the fastest-growing markets on earth.

Our long-term vision is to build the liquidity layer for global AI compute. We are expanding to new frontiers to find the brightest that work in AI infrastructure, research and engineering.

The Opportunity

We're hiring an Infrastructure Manager to accelerate supply and demand matching on our platform. This is an Individual Contributor role reporting to the Head of Infrastructure.

The Infrastructure team sits at the core of the business. We're responsible for acquiring and facilitating compute resources across the company, working closely with compute providers, sales, and technical teams to match compute supply with demand.


Today we have already established the fundamental layer of capacity with providers. As we scale, we are building the next layer: widening our network and liquidity, deepening the scope of our services, and accelerating our growth.


What You'll Do
• Match incoming leads from our sales team with internal capacity and external capacity in the market
• Maximize utilization of our compute resources
• Source and onboard new compute suppliers across the globe
• Source capacity based on customer needs and market trends
• Solve customer and supplier problems in a fast-moving, dynamic market
• Understand technical and commercial differences between suppliers to optimize our capacity funnel
• Develop a proactive compute strategy informed by market intelligence
• Negotiate cost with suppliers and other vendors
• Create and implement processes around capacity planning


What We're Looking For
• 2+ years in cloud sales, GPUs, data centers, or a related field
• Existing network of contacts in the compute market (providers, brokers, or buyers)
• Deep understanding of the GPU compute market and what drives supply and demand
• Strong written and verbal communication across technical and commercial stakeholders
• Sound judgment in decisions that directly impact revenue and cost
• Comfortable operating in ambiguity
• Self-directed and energetic, able to operate autonomously while collaborating cross-functionally
• Bias toward action in a fast-paced environment


Why You'll Love It Here

  • Impact: Be in a critical team unlocking revenue for the wider company

  • Real business: Meaningful revenue, complex transactions, and tangible impact

  • High-growth environment: Get in early at a company in a massive market

  • Ownership: Direct line to leadership and influence over how we scale

  • Competitive compensation + meaningful equity

  • Comprehensive benefits for you and your dependents, including healthcare, dental, and vision coverage, 401(k), and unlimited PTO


Andromeda Cluster is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.



Please mention the word **STRONGER** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
ML Solutions Architect
  • Provectus
  • Remote
architect design system security

As an ML Solutions Architect, you'll be the technical bridge between clients and delivery teams. You'll lead pre-sales technical discussions, design ML architectures that solve business problems, and ensure solutions are feasible, scalable, and aligned with client needs. This is a highly client-facing role requiring both deep technical expertise and strong communication skills.



Core Responsibilities:
  • 1. Pre-Sales and Solution Design (50%)
- Lead technical discovery sessions with prospective clients
- Understand client business problems and translate them into ML solutions
- Design end-to-end ML architectures and technical proposals
- Create compelling technical presentations and demonstrations
- Estimate project scope, timelines, cost, and resource requirements
- Support General Managers in winning new business

  • 2. Client-Facing Technical Leadership (30%)
- Serve as the primary technical point of contact for clients
- Manage technical stakeholder expectations
- Present technical solutions to both technical and non-technical audiences
- Navigate complex organizational dynamics and conflicting priorities
- Ensure client satisfaction throughout the project lifecycle
- Build long-term trusted advisor relationships

  • 3. Internal Collaboration and Handoff (20%)
- Collaborate with delivery teams to ensure smooth handoff
- Provide technical guidance during project execution
- Contribute to the development of reusable solution patterns
- Share learnings and best practices with ML practice
- Mentor engineers on client communication and solution design


Requirements:
  • 1. ML Architecture and Design
- Solution Design: Ability to architect end-to-end ML systems for diverse business problems
- ML Lifecycle: Deep understanding of the full ML lifecycle from data to deployment
- System Design: Experience designing scalable, production-grade ML architectures
- Trade-off Analysis: Ability to evaluate technical approaches (cost, performance, complexity)
- Feasibility Assessment: Quickly assess if ML is an appropriate solution for a problem
  • 2. ML Breadth
- Multiple ML Domains: Experience across various ML applications (RAG, Computer Vision, Time Series, Recommendation, etc.)
- LLM Solutions: Strong experience in architecting LLM-based applications
- Classical ML: Foundation in traditional ML algorithms and when to use them
- Deep Learning: Understanding of neural network architectures and applications
- MLOps: Knowledge of production ML infrastructure and DevOps practices
  • 3. Cloud and Infrastructure
- AWS Expertise: Advanced knowledge of AWS ML and data services
- Multi-Cloud Awareness: Understanding of Azure, GCP alternatives
- Serverless Architectures: Experience with Lambda, API Gateway, etc.
- Cost Optimization: Ability to design cost-effective solutions
- Security and Compliance: Understanding of data security, privacy, and compliance
  • 4. Data Architecture
- Data Pipelines: Understanding of ETL/ELT patterns and tools
- Data Storage: Knowledge of databases, data lakes, and warehouses
- Data Quality: Understanding of data validation and monitoring
- Real-time vs Batch: Ability to design for different data processing needs



Please mention the word **TRUTHFULLY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Full Stack Engineer
  • Darkroom
  • New York
react technical software code

What we’re building

We’re empowering small teams with technology that makes it easier to market and grow businesses. Our current focus is to help consumer brands shift from "workflow automation" to "agent management" within their marketing operations. Matter is the AI coordination layer — providing shared AI memory, centralized agent control, and model differentiation. We founded the company based on a decade of experience providing marketing services to 300+ consumer brands, leveraging that expertise to develop interfaces that streamline user experience in the era of AI.


Why join Matter?

  • Founding Engineer Equity You'll get a meaningful equity stake: early-stage and undiluted.

  • Product Ownership You'll ship production code daily and help steer key product and technical decisions.

  • Shape the Engineering Culture You'll influence how we work—tools, processes, standards, and hiring.

  • Work with Challenger Consumer Brands Talk directly to customers (CEOs, CMOs, VPs) of fast-growing consumer brands—some doing $80M–$500M in revenue.

Don't join Matter if...

  • Work-life balance is a high priority for you

  • You're uncomfortable changing your priorities every 24-48 hours

  • You're not confident in your abilities to manage end-to-end solutions

  • You require many DevOps resources to be successful

About the Role

You'll sit squarely at the intersection of back‑end and front‑end, ensuring seamless integration between APIs, databases, UIs, and ML services. You'll design, build, and scale features end‑to‑end, especially our AI/ML‑powered experiences, while mentoring peers and driving architecture decisions.


Core Tech & Tools

  • Languages & Frameworks: Python, Node.js, React (TypeScript)

  • Datastore: PostgreSQL

  • Cloud & Infra: Google Cloud Platform, Airflow, Terraform, Docker, Kubernetes

  • ML/AI: LLMs, RAG, prompt engineering

  • Other: MCP

Key Responsibilities

  • Architect and implement full‑stack features, from database schema to React components, optimized for scale and reliability.

  • Build and maintain RESTful/GraphQL APIs, data pipelines, and distributed services in GCP.

  • Integrate, prompt, and debug LLMs and generative AI tools; own RAG or fine‑tuning pipelines.

  • Ensure front‑end and back‑end systems interoperate flawlessly, minimize friction, optimize data flow, and enforce contracts.

  • Collaborate with product, research, design, and infra teams to define requirements, iterate rapidly, and ship production‑grade code.

  • Monitor performance, reliability, and security.

  • Mentor junior engineers through code reviews, architecture reviews, and shared best practices.

Requirements

  • 5+ years of professional software engineering experience with end‑to‑end ownership in a full‑stack role.

  • Deep expertise in Python, Node.js, React/TypeScript, and PostgreSQL.

  • Able to be hands‑on with GCP, containerization (Docker/K8s), and building/supporting high‑traffic systems.

  • Proven experience integrating AI/ML models (LLMs, NLP, RAG) into production apps.

  • Familiarity or strong interest in working with MCP servers.

  • Exceptional problem‑solving skills and a product mindset: you think deeply about UX, performance, and business impact.

  • You sweat both technical details and end-user experience.

Nice to Haves

  • Experience with multi‑step or agentic AI workflows.

  • Background in AI infrastructure or tooling companies.

  • Contributions to open‑source AI/ML projects.

What we offer

  • Competitive salary and equity package (roles, responsibilities, and comp grow as we do)

  • Top-tier health, vision, dental insurance (US)

  • Regular team off-sites

  • Regular hack weeks



Please mention the word **EBULLIENCE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
react architect design saas

Distinguished Tech Innovator:

3Pillar warmly extends an invitation for you to join an elite team of visionaries. Beyond software development, we are dedicated to engineering solutions that challenge conventional norms. Envision yourself steering projects that redefine urban living, establishing new media channels for enterprise companies, or driving innovation in healthcare. 

Your invaluable expertise will serve as the cornerstone in shaping the future direction of our endeavors.


This role is the primary expert within a technology stack. The Architect owns the decision making around high-level design choices and dictates technical standards, including software coding standards, tools, and platforms.  The ideal candidate will thrive in a collaborative environment and be engaged in the development process. 

\n


Key Responsibilities:
  • Act as the emissary of the architecture.  Diagram milestones and call out red flags before they become problematic.
  • Technical owner from design to resolution of tailored solutions to sophisticated problems on cloud platforms based on client requirements and other constraints.
  • Partners with appropriate stakeholders to determine functional and nonfunctional requirements, as well as business goals, for a set of scenarios.
  • Assess and plan for new technology insertion.
  • Manage risk identification and risk mitigation strategies associated with the architecture.
  • Influence and communicate long-term product vision, technical vision, development strategy and roadmap.
  • Contribute to code reviews, documentation and architectural artifacts.
  • Active leader in the Architecture Practice community, mentoring Engineers and others through Communities of Practice (CoPs) or on project teams, supporting the growth of technical capabilities.


Minimum Qualifications:
  • A Bachelor’s degree or higher in Computer Science or a related field.
  • A minimum of 5 years of experience/expertise working as a Software Architect, with proficiency in the specified technologies:
  • Azure Cloud Services in a React/Node application environment
  • Microsoft Azure AZ-305 certification (must have)
  • Node.js backend framework
  • Must have TypeScript experience
  • Exposure to NestJS/ExpressJS is good to have.
  • Zod schema validation (nice to have)
  • GitHub, GitHub Actions
  • Orchestration: Kubernetes, Azure Service Bus
  • Database: Postgres, Sequelize ORM (MongoDB nice to have)
  • Python for ETL process (nice to have)
  • WorkOS authentication via SSO (nice to have)

  • High level of English proficiency required to interact with a globally-based development team.
  • Communicate in a clear and understandable manner with clients, and be able to articulate the details of the designed architecture using the appropriate level of technical language.
  • Natural leader with critical reasoning and good decision making skills.
  • Ability to raise red flags on the client or team side due to technical blockers
  • Excellent diagramming and planning skills
  • Extremely good knowledge of SDLC processes and familiarity with actionable metrics and KPIs.
  • Operational excellence in design methodologies and architectural patterns across multiple platforms.
  • Ability to work on multiple parallel projects and utilize time management skills and multitasking capabilities.
  • Experience leading Agile software development methodologies.
  • Experience designing production pipelines: DevOps and CI/CD practices and tools.
  • Demonstrate mentorship and thought leadership to engineers and decision-makers throughout the organization.


Additional Experience Desired:
  • Foundational knowledge in Data Analysis/Modelling/Architecture, ETL dataflows, and a good understanding of highly scalable distributed and cloud-native data stores, specifically serverless architecture.
  • Understand and able to write infrastructure as code
  • Policy-based access control systems (e.g., Cerbos, OPA)
  • Multi-tenant SaaS application design
  • Experience in designing applications involving more than one technology platform (web, desktop, mobile). 
  • Experience in designing SaaS or highly scalable distributed applications on the cloud.
  • Financial management experience and ROI calculation.
  • Solutions Architect certification on major cloud platforms (Azure)
  • TOGAF Certified.


What is it like working for 3Pillar Global?
  • At 3Pillar, we offer a world of opportunity:
  • Imagine a flexible work environment - whether it's the office, your home, or a blend of both. From interviews to onboarding, we embody a remote-first approach.
  • You will be part of a global team, learning from top talent around the world and across cultures, speaking English every day. Our global workforce enables our team to leverage global resources to accomplish our work in efficient and effective teams.
  • We're big on your well-being - as a company, we spend a whole trimester in our annual cycle focused on wellbeing. Whether it is taking advantage of fitness offerings, mental health plans (country-dependent), or simply leveraging generous time off, we want all of our team members operating at their best.
  • Our professional services model enables us to accelerate career growth and development opportunities - across projects, offerings, and industries.
  • We are an equal opportunity employer. It goes without saying that we live by values like Intrinsic Dignity and Open Collaboration to create cutting-edge technology AND reinforce our commitment to diversity - globally and locally.

Join us and be a part of a global tech community!
Check out our Linkedin site and Careers page to learn more about what it's like to be part of our #oneteam!
#LI-Remote


\n

Please mention the word **PEACEFULLY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Account Executive
  • Caylent
  • Texas
amazon security training technical
Caylent is a cloud native services company that helps organizations bring the best out of their people and technology using Amazon Web Services (AWS). We provide a full range of AWS services including workload migrations and modernization, cloud native application development, DevOps, data engineering, security and compliance, and everything in between. At Caylent, our people always come first. We are a global company and operate fully remote with employees in Canada, the United States, and Latin America. We celebrate the culture of each of our team members and foster a community of technological curiosity. Come talk to us to learn more about what it means to be a Caylien!

Your Assignment
  • Communicate via cold calls/emails/social media/in-person meetings with SME prospects.
  • Manage and nurture relationships with AWS and clients.
  • Drive net new customer acquisition and scale existing client base.
  • Design, build, and test new outreach and nurture campaigns.
  • Coordinate closely with content, marketing, and lead generation providers.
  • Drive revenue by winning new services business and/or expanding existing engagements.
  • Attend cloud workshops and training to boost specific skills and possible certifications around cloud, Kubernetes, and DevOps.
  • Engage with AWS and other partners at the tactical and strategic level.

Your Qualifications
  • 5+ years of B2B sales experience selling managed cloud services and/or DevOps consulting.
  • Experience selling AWS and related services is highly desired.
  • Great verbal communication and presentation skills.
  • Assist with creating proposals & SOWs.
  • Negotiate contracts, deliverables, and price.
  • Enthusiasm to work in a startup environment and ability to be cross-functional.
  • Possess natural curiosity and excitement to learn new technology, sell, and succeed as an individual and as a team.
  • Proven track record of sourcing and closing $250K+ ARR deals successfully.
  • Ability to travel 10-25% of the time.
• Technical Background in DevOps or Cloud is preferred.

Please mention the word **AWARDS** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Staff DevOps Engineer
  • Life360
  • Remote
security devops mobile engineer

About Life360

Life360's mission is to keep people close to the ones they love. Our category-leading mobile app, Tile tracking devices, and Pet GPS tracker empower members to protect the people, pets, and things they care about most with a range of services, including location sharing, safe driver reports, and crash detection with emergency dispatch. Life360 serves approximately 91.6 million monthly active users (MAU), as of September 30, 2025, across more than 180 countries.

Life360 delivers peace of mind and enhances everyday family life with seamless coordination for all the moments that matter, big and small. By continuing to innovate and deliver for our customers, we have become a household name and the must-have mobile-based membership for families (and those friends who are basically family).

Life360 has more than 500 (and growing!) remote-first employees. For more information, please visit life360.com.

Life360 is a Remote-First company, which means a remote work environment will be the primary experience for all employees. All positions, unless otherwise specified, can be performed remotely (within the US) regardless of any specified location above. 

About The Team

The Horizons DevOps and Infrastructure team supports large-scale, data-intensive platforms that power real-time adtech and data science workloads across the organization. The team owns and operates critical infrastructure and data platforms, including Databricks, Snowflake, Apache Airflow, and Kubernetes-based services, processing tens of billions of requests and tens of terabytes of data daily. Working closely with data engineering, data science, and security teams, the group focuses on building reliable, scalable, and automated systems that enable high-throughput data processing, analytics, and ML workflows. Team members take end-to-end ownership of production systems, influence architectural direction, and play a key role in evolving the platform as the organization integrates new technologies and scales further.

About the Job

We are seeking a Staff DevOps Engineer.
Please mention the word **PORTABLE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

$$$ Full time
EU GO Senior Software Engineer
  • Connectly
  • Remote/Greece
software system frontend python

At Connectly we are building the future of conversational commerce in Latin America, with a focus on WhatsApp. Instead of shoppers installing yet another app, we offer a 360 engagement platform for retailers inside an app that everyone already has on their phone - WhatsApp. 


We are a VC-backed Series B startup with a world-class team hailing from Meta, Google, Uber, and other top Silicon Valley companies. We operate as a hybrid company, with offices in Bogotá and San Francisco, and a remote-first culture everywhere else.

\n


Job summary
  • We’re looking for an exceptional Senior Backend Engineer with strong Go (Golang) expertise and experience designing large-scale distributed systems.
  • You’ll work across backend and frontend domains, collaborating closely with product, sales, and AI platform teams to design, prototype, and launch powerful conversational experiences for some of Latin America’s largest retailers. This is a role for an independent problem solver who enjoys both deep technical challenges and high-impact product thinking.


Responsibilities include:
  • Design, build, and maintain distributed backend systems using Go, AWS, Kafka, Postgres, and DynamoDB.
  • Collaborate cross-functionally with product managers, designers, and enterprise partners to define user journeys, performance goals, and success metrics.
  • Own critical parts of Connectly’s platform infrastructure — from messaging orchestration to data pipelines and API integrations.
  • Collaborate closely with product, AI, and frontend teams to deliver scalable, customer-facing features.
  • Ensure reliability, observability, and operational excellence across all services.
  • Establish, track, and iterate on performance metrics, leveraging data to optimize outcomes and drive measurable business results.
  • Work asynchronously with global teams, maintaining strong communication and documentation.
  • Plan and manage your workstream, making thoughtful tradeoffs between deadlines, quality, and innovation.
  • Mentor teammates, contribute to code reviews, and uphold engineering best practices in a fast-moving, distributed environment.


What will make you excel at this job:
  • Exceptional communication skills with both technical and non-technical stakeholders.
  • Deep attention to detail paired with strong system-level thinking; you can zoom out to strategy and dive deep into code.
  • A bias for action and results, with comfort navigating ambiguity and evolving product needs.
  • Genuine curiosity and a drive to stay ahead of the rapidly changing AI landscape.
  • Balance of product sense and technical rigor; you care as much about user experience as you do about system performance.
  • Experience with cloud infrastructure (AWS) and event-driven architectures.
  • Solid understanding of system design, concurrency, and data consistency.
  • Pragmatic approach to engineering; you balance simplicity, reliability, and speed.


Requirements
  • BS or MS in Computer Science or related technical field.
  • 5+ years of experience in hands-on software engineering roles.
  • Proven track record building and scaling enterprise systems using Go, AWS, Kafka, Postgres, and/or DynamoDB.
  • Experience with Python is a plus.
  • Experience with frontend engineering (React, TypeScript, etc.) is a plus.
  • Prior experience developing or deploying WhatsApp conversational applications is a strong plus.
  • Experience working in fast-paced, customer-centric environments, ideally in a startup or high-growth tech company.
  • Based in Europe; remote-first with occasional team offsites.


Benefits
  • Work alongside an exceptional, mission-driven team in a culture that values curiosity, impact, and continuous learning.
  • Competitive compensation with equity participation.
  • Unlimited time off and flexible working hours.
  • Remote-first culture across the EU.


\n

We are a strong believer in passion, curiosity and willingness to learn on the job. If you are in doubt, we encourage you to apply! 


Connectly is an equal opportunity employer. We’re committed to building a diverse, inclusive, and supportive workplace that is distributed around the world.



Please mention the word **EMINENCE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Frontend Tech Lead
  • AirDNA
  • Remote
frontend design react training

About AirDNA

We built AirDNA to solve a problem: how do you make smart short-term rental decisions when there’s too much guesswork and not enough good data?


What started in a garage in California in 2015 is now a global team helping thousands of people — from aspiring hosts to major real estate firms — make confident choices about where to invest, what to charge, and how to grow.


Our mission is simple: give people the tools they need to build freedom through short-term rentals. Whether that means buying their first Airbnb or scaling a portfolio, we’re here to help unlock financial independence and growth.


We track 10M+ listings in 120,000 markets, and our platform is trusted by users in over 100 countries. It’s big data, made useful.


In 2023, AirDNA acquired Uplisting, a powerful property management software that helps hosts and operators manage listings across Airbnb, Vrbo, and other platforms. With features like channel management, automated messaging, dynamic pricing, task coordination, and financial reporting, Uplisting expands our mission to support every stage of the short-term rental journey — from investment to operations.


The AirDNA team

We’re a curious, driven, and kind group of humans who genuinely love what we do. Our values — Happy, Hungry, Honest — guide how we show up for our customers and for each other.


Want to see what that looks like in action? You’ll get a feel once you meet us.

We welcome applicants from all backgrounds and encourage you to apply even if you don’t check every box. Passion, potential, and perspective matter here.


The Role

AirDNA is looking for a Frontend Tech Lead to help shape the future of our product experience and technical direction. While this role is full-stack, you will be the technical driver for our frontend guild, pushing forward our React/TypeScript architecture, design systems, and developer experience. You’ll partner with Product, Design, and Engineering leaders to deliver beautiful, performant, and scalable customer-facing applications. As a Tech Lead, you’ll guide technical decisions across squads, mentor engineers, and help set the long-term direction of our frontend practice.

\n


Here's what you'll get to do:
  • Lead frontend technical strategy: Define best practices, champion modern frontend architecture, and drive adoption of component libraries, state management patterns, and performance optimizations.
  • Build customer-facing features: Work as a hands-on engineer in your squad, implementing features with React, TypeScript, Next.js, and associated libraries.
  • Shape the frontend guild: Facilitate guild discussions, align engineers across squads, and promote knowledge-sharing and consistency in our frontend stack.
  • Mentor and grow engineers: Coach junior and mid-level developers, review code, and help engineers build strong frontend skills.
  • Collaborate cross-functionally: Partner with Product Managers, Designers, Data Scientists, and Backend Engineers to deliver features that delight customers.
  • Contribute full-stack when needed: While you’re frontend-leaning, you’ll occasionally dive into backend services (Python, AWS, APIs, Kubernetes) to deliver end-to-end solutions.
  • Drive engineering excellence: Influence tooling, CI/CD, testing, and monitoring strategies that improve developer velocity and reliability.
  • Represent engineering: Serve as a technical leader in planning sessions, roadmap discussions, and cross-team initiatives.


Here's what you'll need to be successful:
  • Experienced: 8+ years of professional software engineering, with at least 5 years of recent experience in React and TypeScript.
  • Frontend expert: You’ve scaled and optimized large-scale SPAs, understand rendering/performance tradeoffs, and care deeply about accessibility and design fidelity.
  • Full-stack capable: You’re comfortable contributing to backend systems (Python/Django/FastAPI, AWS, data pipelines) when the team needs it.
  • Technical leader: You’ve led technical discussions, influenced architecture decisions, and aligned teams toward common engineering standards.
  • Mentor: You enjoy leveling up others, giving thoughtful feedback, and guiding careers.
  • Collaborator: You thrive in cross-functional environments and can translate business goals into technical strategy.
  • Forward-thinking: You stay current on frontend trends, evaluate emerging tools, and bring pragmatic innovation to the team


Here's what would be nice to have:
  • Experience with design systems and component libraries (e.g., Storybook, Radix, Styled Components).
  • Experience with React Query, Recoil, Redux, or other state/data management approaches.
  • Experience with Google Maps API or other data visualization libraries (D3, Leaflet, Mapbox).
  • Strong background in CI/CD pipelines (GitLab preferred) and containerization (Docker/Kubernetes).
  • Familiarity with headless CMS platforms (Prismic, Contentful).
  • Experience with data-intensive apps, large-scale visualizations, or personalization at scale.


Here's what you can expect from us:
  • Competitive cash compensation and benefits; the salary for this position is $130,000 - $175,000 per year. 
  • Colorado Salary Statement: The salary range displayed is specifically for potential hires who will work or reside in the state of Colorado if selected for this role. Any offered salary is determined based on internal equity, internal salary ranges, market data/ranges, the applicant's skills and prior relevant experience, and certain degrees and certifications. 
Benefits include: 
  • Medical, dental, and vision packages to meet your needs
  • Unlimited vacation policy; take time when you need it 
  • Quarterly team outings 
  • 401K with employer match up to 4%
  • Continuing education stipend
  • Lunch is provided Tuesday to Thursday for those in the Denver office
  • Commuter/RTD benefit for Denver based employees
  • 16 weeks of paid parental leave
  • New MacBooks for employees
  • Pet-friendly!


\n

AirDNA seeks to attract the best-qualified candidates who support the mission, vision and values of the company and those who respect and promote excellence through diversity. We are committed to providing equal employment opportunities (EEO) to all employees and applicants without regard to race, color, creed, religion, sex, age, national origin, citizenship, sexual orientation, gender identity and expression, physical or mental disability, marital, familial or parental status, genetic information, military status, veteran status or any other legally protected classification. The company complies with all applicable state and local laws governing nondiscrimination in employment and prohibits unlawful harassment based on any of the aforementioned protected classes at every location in which the company operates. This applies to all terms, conditions and privileges of employment including but not limited to: hiring, assessments, probation, placement, benefits, promotion, demotion, termination, layoff, recall, transfer, leave of absence, compensation, training and development, social and recreational programs, education assistance and retirement. 


We are committed to making our application process and workplace accessible for individuals with disabilities. Upon request, AirDNA will reasonably accommodate applicants so they can participate in the application process unless doing so would create an undue hardship to AirDNA or a threat to these individuals, others in the workplace or the company as a whole. To request accommodation, please email compliance@airdna.co. Please allow for 24 hours to process your request. 


By applying for the above position, you will confirm that you have reviewed and agreed to our Data Privacy Notice for Applicants.



Please mention the word **PRICELESS** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$175000 - $250000 Full time
Security Engineer
  • PermitFlow
  • New York City
security frontend architect software

PermitFlow is redefining how America builds. We’re an applied AI company serving the nation’s builders, tackling one of the largest information challenges in the economy: understanding what can be built, where, and how. Our AI agent workforce helps the fastest-growing construction companies navigate everything from permitting and licensing to inspections and project closeouts – accelerating housing, clean-energy, and infrastructure development across the country.

Despite being a $1.6T industry, construction still suffers from massive delays, wasted capital, and lost opportunity. PermitFlow has already delivered unprecedented speed, accuracy, and visibility to over $20B in development, helping contractors reduce compliance time, de-risk projects, and scale with confidence.

America is entering a CAPEX super-cycle, from data centers and factories to housing and renewables, and PermitFlow is building the AI at the heart of every construction project, powering the next wave of re-industrialization.

We’ve raised over $90M, most recently completing our Series B, from top-tier investors including Accel, Kleiner Perkins, Initialized, Y Combinator, Felicis, and Altos Ventures, with backing from leaders at OpenAI, Google, Procore, ServiceTitan, Zillow, PlanGrid, and Uber.

Role Overview

As a Security Engineer, you’ll join our growing platform team in building, scaling, and fine-tuning the systems that keep our platform secure and compliant. You’ll help architect the security backbone of our platform, focusing on compliance, risk reduction, security automation, and continuous improvement. While your primary responsibility will be security and governance, coding and problem-solving across the stack are core parts of the role. As a fast-growing startup, we all roll up our sleeves where needed, so flexibility and a collaborative, security-first mindset are key.

What You'll Do

  • Architect, design, and implement secure, compliant, scalable, and cost-efficient infrastructure solutions to protect a rapidly growing product.

  • Lead the execution and maintenance of our SOC2 compliance program and other security-related certifications.

  • Design, implement, and audit Role-Based Access Controls (RBAC), Identity and Access Management (IAM), and secrets management systems.

  • Design and implement security best practices for backend, frontend services, APIs, and data pipelines.

  • Own security features end-to-end, from architecture and implementation to testing and production deployment.

  • Develop and maintain security automation, Infrastructure as Code, and secure CI/CD pipelines.

  • Implement and manage security monitoring, threat detection, and vulnerability management across our cloud infrastructure.

  • Establish and enforce security best practices for authentication, authorization, logging, and alerting.

  • Lead and participate in incident response, troubleshooting complex security issues and driving postmortem learning and improvements.

  • Collaborate across engineering teams to embed security into the software development lifecycle and balance compliance, velocity, and cost.

What We're Looking For

  • 5+ years of experience in Security Engineering, AppSec, GRC, or similar roles.

  • Proven experience designing and implementing security controls for SOC2, ISO 27001, or similar compliance frameworks.

  • Deep expertise in Role-Based Access Controls (RBAC), Identity and Access Management (IAM), and secrets management.

  • Strong experience with container security and orchestration (Docker, ECS, Kubernetes a plus).

  • Expertise with secure CI/CD pipelines and modern security automation tools.

  • Coding and scripting proficiency (TypeScript, Python, Go, Bash, etc.).

  • Hands-on experience with cloud security (GCP preferred) and securing distributed systems.

  • Familiarity with monitoring, observability, and incident management best practices.

  • Comfortable working in a fast-paced, compliance-focused startup environment, where adaptability and security ownership are essential.

What We Offer

  • Competitive salary and meaningful equity in a high-growth company

  • Comprehensive medical, dental, and vision coverage

  • Flexible PTO and paid family leave

  • Home office & equipment stipend

  • Hybrid NYC office culture (3 days in-office/week) with direct access to leadership

  • In-Office Lunch & Dinner Provided

PermitFlow provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, genetics, sexual orientation, gender identity, gender expression, or family status, as protected by applicable law.


We are committed to a diverse and inclusive workforce and welcome people from all backgrounds, experiences, perspectives, and abilities. All employment decisions are based on merit, qualifications, and business needs.



Please mention the word **REFORM** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$100000 - $150000 Full time
Principal Software Engineer
  • Recorded Future
  • Boston, MA
software design architect technical
With 1,000+ intelligence professionals serving over 1,900 clients worldwide, Recorded Future is the world’s most advanced, and largest, intelligence company! We’re looking for a Principal Software Engineer to help design, build, and scale the systems that power our Attack Surface Intelligence module. You’ll take ownership of the data pipelines responsible for ingesting and distributing critical intelligence signals, both internally and directly to customers via the product. The Attack Surface Intelligence Data Engineering team is responsible for two key datasets: our holistic global internet inventory and the technical artifacts of our customers’ attack surface. This role reports directly to the Engineering Owner for Attack Surface Intelligence Data and is ideal for someone who enjoys writing clean, maintainable code and thrives in distributed systems environments. You'll work closely with product management and other engineering teams to drive technical strategy and ensure our systems are reliable, performant, and insightful.

What You’ll Do:

  • Lead the design and implementation of backend services and APIs in Python.

  • Architect and evolve microservice-based systems for scalability and resilience.

$$$ Full time
c# front-end software backend
NEORIS, now part of EPAM, is a digital accelerator that helps companies step into the future, with 20 years of experience as the Digital Partner of some of the world's largest companies. We are more than 4,000 professionals across 11 countries, with a multicultural startup culture where we cultivate innovation and continuous learning to create high-value solutions for our clients. We are looking for talent to fill the position of .NET/SQL Developer: a professional with a degree in Systems Engineering, Computer Science, or a related field, with at least 3 years of software development experience using C#, .NET Framework, and .NET Core, able to participate in the full development cycle, propose technical improvements, and work as part of a team to implement solutions, fix bugs, and bring innovation to the area's technology tools.

Main responsibilities:

  • Design and develop the product's business logic and backend systems.

  • Work closely with front-end developers to design and build functional, effective, and complete APIs.

  • Decipher the software systems of existing legacy applications and be able to integrate the application with the applicable data sources.

Requirements:

  • .NET / C# – Development of backend applications, building APIs, services, and business logic.

  • ETLs – Design, construction, and maintenance of data extraction, transformation, and load processes.

  • SQL Server – Advanced command of queries, stored procedures, database modeling, and optimization

$$$ Full time
content senior engineer backend

ABOUT onX

As a pioneer in digital outdoor navigation with a suite of apps, onX was founded in Montana, which in turn has inspired our mission to awaken the adventurer inside everyone. With more than 400 employees located around the country working in largely remote / hybrid roles, we have created regional “Basecamps” to help remote employees find connection and inspiration with other onXers. We bring our outdoor passion to work every day, coupling it with industry-leading technology to craft dynamic outdoor experiences.

Through multiple years of growth, we haven't lost our entrepreneurial ethos at onX. We offer a fast-paced, growing, tech-forward environment where ownership, accountability, and passion for winning as a team are essential. We value diversity and believe it leads to different perspectives and inspires both new adventures and new growth. As a team, we're hungry to improve, value innovation, and believe great ideas come from any direction.

Important Alert: Please note, onXmaps will never ask for credit card or SSN details during the initial application process. For your digital safety, apply only through our legitimate website at onXmaps.com or directly via our LinkedIn page.

WHAT YOU WILL DO

onX is seeking a talented Senior Backend Engineer to join our Content Delivery team. In this role, you will build the backend infrastructure that powers offline map experiences for millions of outdoor enthusiasts. You will work on high-performance data pipelines, map tile generation and delivery systems, and large-scale geospatial


$$$ Full time
Principal Data Engineer
  • Waymark
  • Remote
technical health healthcare engineer
About Waymark

Waymark is a mission-driven team of healthcare providers, technologists, and builders working to transform care for people with Medicaid benefits. Our community-based care teams—powered by proprietary data science and ML technologies—support care for tens of thousands of Medicaid members across multiple states, driving measurable reductions in avoidable emergency department visits and hospitalizations. We're designing tools and systems that bring care directly to those who need it most—removing barriers and reimagining what's possible in Medicaid healthcare delivery—and we are seeking a highly experienced Data Engineer to join this mission.

This is a principal-level individual contributor role that combines deep backend engineering fundamentals with specialized expertise in Electronic Health Record (EHR) data integration. You will report to data engineering leadership and set the technical direction for our clinical data platform by leading the design, development, and optimization of data pipelines that ingest, normalize, and transform clinical data from diverse EHR and payer systems. If this resonates with you, we invite you to bring your creativity, energy, and curiosity to Waymark.

Key Responsibilities

EHR & Partner Integrations

Architect production-grade data pipelines that integrate clinical data through multiple channels—direct EHR connections (e.g., Epic, Cerner, Athenahealth), health information exchanges (HIEs), health alliance networks, and third-party integration vendors—via

$$$ Full time
Senior Backend Engineer Integrations
  • Arbiter AI
  • New York City
design system python technical

Arbiter is the AI-powered care orchestration system that unites healthcare. We are launching our best-in-class, patient-facing Agentic platform to optimize patient outcomes through a unique multimodal approach. We optimize complex healthcare workflows that interface with patients using the latest Agentic AI approaches, and we combine it with a sophisticated platform to serve this Agentic layer at scale. We are looking for expert engineers and leads to join our team and help us push the frontier of what's possible with Agentic workflows + Healthcare.

Backed by one of the largest seed rounds in health tech history and operators who bring the expertise and distribution to scale nationally, we're building the connected infrastructure healthcare should have had all along.

Our Engineering Culture & Values

We are a high-performing group of engineers dedicated to delivering innovative, high-quality solutions to our clients and business partners. We believe in:

  • Engineering Excellence: Taking immense pride in our technical craft and the products we build, treating both with utmost respect and care.

  • Impact-Driven Development: Firmly committed to engineering high-quality, fault-tolerant, and highly scalable systems that evolve seamlessly with business needs, minimizing disruption.

  • Collaboration Over Ego: Valuing exceptional work and groundbreaking ideas above all else. We seek talented individuals who are accustomed to working in a fast-paced environment and are driven to ship often to achieve significant impact.

  • Continuous Growth: Fostering an environment of continuous learning, mentorship, and professional development, where you can deepen your expertise and grow your career.

Responsibilities

As a Senior Backend Engineer, you will design, build, and operate the platform systems that power Arbiter's connections to the outside world and ensure reliable, performant data exchange across a complex ecosystem. You will own critical parts of our backend infrastructure, from API design and service orchestration to data pipelines and third-party system connectivity, working closely with product, engineering, and customer teams to ship production-grade systems with real customer dependency.

  • Platform Architecture & Backend Systems: Design, develop, and operate backend services that power Arbiter's core platform, with an emphasis on reliability, modularity, and clean system boundaries.

  • External System Connectivity: Build and maintain robust connections to third-party systems (e.g. cloud APIs, AI services, data exchange services, EHRs, telephony platforms). Own the abstractions that make these integrations reusable and adaptable across customers with minimal rework.

  • API Design & Data Exchange: Design and operate high-scale APIs (REST, gRPC, webhooks) and manage complex data flows including real-time streaming, batch processing, file-based exchange (e.g. SFTP, HL7, EDI), and event-driven pipelines.

  • Performance & Reliability: Ensure high throughput, low latency, and fault tolerance across backend services through strong system design, monitoring, alerting, and operational best practices. Handle vendor failures, retries, idempotency, and graceful degradation.

  • Data Engineering & Pipeline Ownership: Build and maintain ETL/ELT pipelines, manage schema evolution, and ensure data quality and integrity across systems with varying formats, standards, and reliability.

  • Infrastructure & Deployment Excellence: Implement and uphold best practices for CI/CD, testing, observability, and deployment of backend systems in production cloud environments.

  • Cross-Functional Execution: Partner closely with AI engineers, product managers, implementation teams, and customer stakeholders to translate ambiguous, high-impact problems into scalable technical solutions.

  • Technical Leadership & Mentorship: Mentor engineers, contribute to internal documentation and standards, influence technical direction, and raise the overall engineering bar.

  • Ownership & On-Call: Take end-to-end ownership of critical systems, including participating in on-call rotations and leading incident resolution when production issues arise.

Minimum Qualifications

  • 5+ years of hands-on experience building and operating production backend systems in high-availability environments.

  • Computer Science or Engineering degree, or equivalent practical experience.

  • Experience building and maintaining large-scale Python codebases with strong opinions on structure, quality, and tradeoffs.

  • Deep understanding of API design patterns, versioning, backward compatibility, and managing breaking changes across consumers.

  • Experience building reusable abstraction layers or connector frameworks that allow a single integration pattern to serve multiple customers or vendors.

  • Proven experience designing systems that connect to third-party services, including handling authentication, rate limiting, retry logic, and failure modes gracefully.

  • Strong understanding of concurrency, scalability, reliability, and distributed systems patterns.

  • Hands-on experience with data pipeline architectures: batch and streaming, schema management, and data quality enforcement.

  • Experience with cloud infrastructure (AWS, GCP, or Azure) and production deployments.

  • Strong communication skills and ability to work effectively across functions.

  • Proficiency with AI-assisted development tools (e.g., Cursor, Claude Code, GitHub Copilot).

  • Track record of delivering complex systems end-to-end with minimal oversight.

Preferred Qualifications

  • Experience with healthcare data exchange standards (HL7, FHIR, EDI) or similarly complex domain-specific protocols in other industries (fintech, telecom, logistics) is a plus.

  • Familiarity with database performance tuning, query optimization, and managing large-scale relational databases (PostgreSQL, CloudSQL).

  • Startup or early-stage experience operating in fast-moving, high-ambiguity environments.

This role can be remote or on-site, based in our New York City or Boca Raton offices, in a fast-paced, collaborative environment where great ideas move quickly from whiteboard to production.

Job Benefits

We offer a comprehensive and competitive benefits package designed to support your well-being and professional growth:

  • Highly Competitive Salary & Equity Package: Designed to rival top FAANG compensation, including meaningful equity.

  • Generous Paid Time Off (PTO): To ensure a healthy work-life balance.

  • Comprehensive Health, Vision, and Dental Insurance: Robust coverage for you and your family.

  • Life and Disability Insurance: Providing financial security.

  • Simple IRA Matching: To support your long-term financial goals.

  • Professional Development Budget: Support for conferences, courses, and certifications to fuel your continuous learning.

  • Wellness Programs: Initiatives to support your physical and mental health.

Pay Transparency

The annual base salary range for this position is $148,500-$190,000. Actual compensation offered to the successful candidate may vary from the posted hiring range based on work experience, skill level, and other factors.



$$$ Full time
Software Engineer
  • Clover Health
  • USA
software design financial cloud
At Counterpart Health, we are transforming healthcare and improving patient care with our innovative primary care tool, Counterpart Assistant. By supporting Primary Care Physicians (PCPs), we are able to deliver improved outcomes to our patients at a lower cost through early diagnosis and longitudinal care management of chronic conditions. We are looking for Software Engineers who are eager to tackle a variety of challenges. In this role, you will collaborate with developers, data scientists, and healthcare professionals to build tools that improve real-world health outcomes.

As a Software Engineer, you will:

  • Simplify the complexities of healthcare by building scalable systems that enhance human efforts.

  • Stay up-to-date with new tools and technologies to solve challenges and advance our goals.

  • Help define and maintain development best practices to enable rapid iteration while ensuring quality, including writing tests and documenting key implementations.

  • Work with Product Managers and operational teams to design and develop new features.

You should get in touch if:

  • You have 3+ years of experience as a Software Engineer with proficiency in Python, JavaScript, or Go.

  • You have experience writing SQL queries in databases such as Postgres, MySQL, BigQuery, Snowflake, or similar systems.

  • You are comfortable working with data pipelines, including cleaning, normalizing, and improving data quality.

  • You can create and call RESTful APIs (experience with gRPC is a plus).

  • You have experience working with cloud services such as GCP or AWS.

Benefits Overview:

  • Financial Well-Being: Our commitment to attracting and r

$$$ Full time
Customer Program Manager
  • Nexxa.AI
  • Sunnyvale
manager jira training consulting

Customer Program Manager

Cross-Site Project Coordination | Schedule & Risk Management | High-Visibility Communication | SF Bay Area, CA

ABOUT NEXXA

Nexxa.ai is building artificial super intelligence for heavy industries — enabling machines, systems and operations to think, decide and act autonomously across manufacturing, large-scale infrastructure, logistics and legacy environments. Our mission is to translate deep technical breakthroughs into operational reality, solving some of the hardest systems-level problems in industry.

THE ROLE

Reporting to the CPO

We're hiring a Customer Program Manager to be the operational backbone of our customer delivery engine. You'll manage project schedules, status visibility, and cross-site coordination across Applied AI and core engineering teams operating across global sites — ensuring every engagement ships on time with full visibility. You'll work alongside a Delivery Manager, who owns the customer relationship and outcome quality, and a remote core-engineering project manager. Your job is to make sure the delivery machine runs: schedules are tracked, risks are flagged early, handoffs are clean, and every stakeholder knows exactly where things stand at any given moment.

WHAT YOU'LL DO

  • Manage end-to-end project schedules for customer engagements across Applied AI (FDE team) and core engineering teams spanning multiple geographies and time zones

  • Maintain real-time project status visibility — Confluence boards, Jira tracking, weekly status reports — so leadership, engineering, and the Delivery Manager always have a single source of truth

  • Run internal project review cadences: bi-weekly planning reviews, customer submissions reviews, and dev question sessions across all active engagements

  • Proactively identify risks, dependencies, and blockers before they become surprises — escalate to the Delivery Manager with proposed mitigations, not after deadlines slip

  • Own cross-site coordination across multiple sites — bridging time zones, aligning handoffs, and ensuring nothing falls between teams

  • Drive daily and weekly status updates across all active projects — post EOD updates in team channels with key changes, blockers, and next actions tagged to DRIs

  • Prepare and deliver weekly internal status reports to the CPO every Friday — consolidating project health, risk register, and upcoming milestones across all accounts

  • Track and maintain delivery governance artifacts: project plans, feedback/release trackers, QA checklists, go-live readiness assessments

  • Coordinate resource allocation and capacity planning across FDEs and engineering — flag overload risks and propose reallocation before quality suffers

  • Ensure Jira hygiene: correct assignees, updated due dates, closed tickets, and clean backlogs — so automated reporting and AI tools produce accurate outputs

  • Support the Delivery Manager in preparing customer-facing materials: milestone review decks, progress summaries, and QBR data

HOW THIS ROLE WORKS WITH THE DELIVERY MANAGER

The CPM and Delivery Manager share the delivery mission but own different dimensions:

  • You own: project schedules, daily/weekly status tracking, Jira hygiene, cross-site coordination, Confluence boards, internal reporting, resource capacity flagging, and governance artifact maintenance

  • Delivery Manager owns: customer relationship, outcome definition, delivery quality sign-off, CSAT/NPS, escalation resolution, post-delivery retrospectives, and account expansion insights

  • Together: the DM ensures we deliver the right thing at the right quality; you ensure we deliver it on schedule with full visibility and zero surprises

WHAT WE'RE LOOKING FOR

  • 5+ years in technical program management, project management, or delivery management — with at least 2 years managing cross-functional, cross-site engineering teams

  • Proven experience managing 3–5 external-facing projects concurrently without dropping balls — you have a system, not just hustle

  • Strong command of project management tooling: Jira, Confluence, Rocketlane (or similar), and spreadsheet-based reporting. You're the person who keeps these tools clean and current.

  • Experience coordinating across time zones and distributed teams — you've worked with India/APAC engineering teams and know how to structure async handoffs

  • Excellent written communication — your status updates are crisp, your escalations are clear, and your meeting notes are actionable. You don't write paragraphs; you write bullet points with owners and dates.

  • Technical fluency — you can read architecture docs, understand data pipeline concepts, and have productive conversations with engineers about scope, effort, and trade-offs. You don't need to code, but you need to understand the work.

  • Anticipatory mindset — you see risks coming before they materialize. You flag a Milestone 1 delivery risk on Monday, not on Thursday when it's due.

  • Experience in enterprise SaaS, consulting delivery, or systems integration. Heavy industry experience (manufacturing, supply chain, energy) is a strong plus.

KEY SUCCESS INDICATORS

  • 100% of active projects have up-to-date Confluence boards with milestones, DRIs, and dates — refreshed daily, not weekly

  • Zero surprise delays — risks are flagged at least 1 week before they impact a deadline, with proposed mitigations

  • Weekly status reports delivered to Shashank (CPO) every Friday for Monday leadership calls — no exceptions, no late submissions

  • Customer communication cadence running on schedule: weekly updates sent, bi-weekly check-ins held, milestone reviews documented

  • Cross-site engineering alignment verified at every handoff — India team has clear specs, context, and deadlines before they start work

  • Jira data quality at 100% — accurate assignees, no stale tickets, closed items marked done. Automated reports pull clean data.

  • Resource conflicts identified and escalated before they impact delivery — capacity planning is proactive, not reactive

NICE TO HAVE

  • Experience with Rocketlane, Asana, or Monday.com for customer-facing delivery management

  • Prior experience at a fast-growing startup (seed to Series B) where you built the PM process from scratch

  • Experience working with AI/ML engineering teams — understanding model training timelines, data pipeline dependencies, and iterative delivery cycles

  • Familiarity with enterprise procurement and vendor management processes (purchasing control towers, SOW reviews, NDA workflows)

WHY NEXXA

  • Architect the intelligence layer for the world's largest industrial companies — your designs will run at top Fortune 100 companies

  • Work directly with the CPO and CTO on every engagement — ZERO layers of bureaucracy

  • Backed by top Silicon Valley VCs, with access to their portfolio network and enterprise resources

  • Early-stage equity with significant upside



$$$ Full time
manager training technical supervisor

HHAeXchange is the leading technology platform for home and community-based care. Founded in 2008, HHAeXchange was born out of an idea to create a fully comprehensive end-to-end homecare solution to help people who are aging or have disabilities thrive in their homes and communities. Our employees are passionate about transforming the healthcare space by building the only homecare ecosystem that fully connects patients, personal care providers, managed care organizations, and states.  

HHAeXchange is seeking a Product Manager, Data Management & Platform to help define, govern, and scale how data is used across our healthcare platform. This role sits at the intersection of Product, Engineering, and Clinical/Financial operations, ensuring that the data powering RCM, EHR, Payroll, Payments, and the Universal Patient Record is accurate, connected, and trusted — and that it serves as a reliable foundation for AI-driven innovation.

This is an individual contributor role for a healthcare product professional who understands real-world clinical and financial workflows, is energized by the potential of AI to transform healthcare data, and can translate complex requirements into clear, actionable product decisions. The ideal candidate brings 5–7 years of product management experience in healthcare IT, a solid grasp of data platform concepts, and a genuine enthusiasm for applying AI and machine learning to solve meaningful problems in the home care space.

To perform this job successfully, an individual must be able to perform each essential job duty satisfactorily with or without reasonable accommodation.  Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

This is a fully remote opportunity for candidates located in the EST or CST time zones within the US only.



Essential Job Duties

Product-Led Data Strategy

  • Contribute to and help execute the product vision and roadmap for HHAeXchange's enterprise data platform.
  • Define how core clinical, operational, and financial data is modeled, linked, and surfaced across the product ecosystem.
  • Partner with domain PMs (RCM, EHR, Payroll, Payments) to align data structures to real-world workflows and end-user needs.
  • Identify opportunities to reduce data fragmentation and improve consistency across product domains.

AI Enablement & Innovation

  • Serve as a product champion for AI and machine learning use cases built on the HHAeXchange data platform.
  • Define and prioritize data requirements that enable AI-driven features including predictive analytics, anomaly detection, automation, and intelligent recommendations.
  • Work with data science and engineering teams to ensure training data quality, feature pipelines, and model outputs are properly governed and trustworthy.
  • Evaluate and recommend AI tools, platforms, and frameworks that can accelerate product delivery and enhance the platform's intelligence capabilities.
  • Stay current on emerging AI/ML trends in healthcare — including generative AI, LLM applications, and agentic workflows — and translate relevant developments into product opportunities.
  • Champion responsible AI practices, including fairness, explainability, and compliance considerations relevant to healthcare data.

Healthcare Data Enablement

  • Ensure data models support claims, visits, authorizations, care plans, payroll, and payer rules.
  • Translate regulatory, audit, and reimbursement requirements into data standards and traceability.
  • Improve data lineage and reconciliation across payer-provider workflows.
  • Support the development of a Universal Patient Record that is complete, current, and usable across the platform.

Cross-Team Execution

  • Collaborate closely with Engineering, Architecture, and Platform teams to shape data services, APIs, and pipelines.
  • Write clear product requirements, user stories, and acceptance criteria for data platform features.
  • Prioritize data initiatives based on customer impact, revenue risk, compliance needs, and scalability.
  • Drive alignment across product teams on shared data definitions, metrics, and reporting standards.

Governance & Data Quality

  • Support the definition of data ownership, stewardship, and quality standards across product domains.
  • Help establish validation, monitoring, and escalation processes for data defects.
  • Create visibility into data health for product leaders, operations teams, and stakeholders.
  • Contribute to documentation of data standards and governance policies.


Other Job Duties
  • Other duties as assigned by supervisor or HHAeXchange leader.


Travel Requirements
  • Travel 10-25%, including overnight travel


Required Education, Experience, Certifications and Skills

Required 

  • 5–7 years of experience in product management within healthcare IT, preferably in RCM, EHR, or payer-provider platforms.
  • Solid understanding of claims workflows, clinical documentation, authorizations, eligibility, and reimbursement processes.
  • Demonstrated interest in and experience with AI, machine learning, or advanced analytics applied to healthcare data.
  • Familiarity with data platforms, data warehouses or lakehouses, and analytics and reporting tools.
  • Ability to partner effectively with Engineering and Architecture on platform-level systems and data infrastructure.
  • Working knowledge of healthcare data regulations and compliance requirements (e.g., HIPAA, Medicaid program integrity, EVV).
  • Strong written and verbal communication skills, including the ability to translate technical data concepts for non-technical stakeholders.
  • Experience writing product requirements, managing a backlog, and driving delivery in an agile environment.
  • Curiosity, adaptability, and a proactive mindset in a fast-evolving product environment.

Preferred

  • Experience with AI/ML product development, including defining data pipelines, feature requirements, or model evaluation criteria.
  • Familiarity with generative AI tools and their application in healthcare workflows (e.g., clinical documentation, billing, analytics).
  • Experience with Medicaid home care, personal care services (PCS), or HCBS programs.
  • Knowledge of data governance frameworks, master data management (MDM), or data quality tooling.
  • Exposure to modern data stack technologies (e.g., dbt, Snowflake, Databricks, or similar).
  • Experience working with EVV data or similar real-time visit verification systems.
  • Familiarity with interoperability standards such as HL7, FHIR, or X12 EDI.

 

Success Measures (First 12–18 Months)

  • Clear, well-adopted data models across key clinical and financial workflows.
  • Measurable reduction in data-related defects impacting claims, payroll, and reporting.
  • At least one AI-driven product capability successfully launched on a trusted data foundation.
  • Improved reconciliation across payer, provider, and caregiver data.
  • Faster time-to-market for data-dependent product features.
  • Strong cross-team adoption of shared data standards and definitions.



The base salary range for this US-based, full-time, and exempt position is $105,000-$115,000/yr, not including variable compensation. An employee’s exact starting salary will be based on various factors including but not limited to experience, education, training, merit, location, and the ability to exemplify the HHAeXchange core values.

 

This is a benefits-eligible position. HHAeXchange offers competitive health plans, paid time off, company-paid holidays, and a 401(k) retirement program with a company-elected match, as well as other company-sponsored programs.

 

HHAeXchange is an equal-opportunity employer. The Company offers employment opportunities to all applicants and employees without regard to race, color, religion, national origin, sex, sexual orientation, gender identity or expression, age, disability, medical condition, marital status, veteran status, citizenship, genetic information, hairstyles, or any other status protected by local or federal law.



Please mention the word **SUAVE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
serverless node.js api senior

About Coderio

 

Coderio designs and delivers scalable digital solutions for global companies. With a solid technical foundation and a product-oriented mindset, our teams lead complex projects from architecture to execution. We value autonomy, clear communication, and technical excellence, collaborating closely with international teams and partners to build technology that makes an impact.

🌍 More information: http://coderio.com

We are looking for a backend engineer with independent technical judgment, capable of designing event-driven microservices that handle millions of requests without blinking. You will be responsible for the service layer and data pipelines, making critical telemetry available for analytics. You must be able to engage with technical judgment alongside Data Engineering teams and design scalable solutions under pressure.

What you can expect from this role (Responsibilities)

 

This is a role of total technical ownership: you design, decide, build, operate, and take responsibility for critical domains of the platform.

 

Requirements

5+ years of backend development experience (seniority based on autonomy and proactivity).

3+ years of solid experience with Node.js and TypeScript.

3+ years operating in AWS Serverless environments (Lambda, API Gateway, SQS, SNS).

2+ years of experience with basic data engineering and relational database modeling (PostgreSQL).

Nice to have

1+ year of experience with TimescaleDB or other time-series databases.

Prior experience with IoT or industrial telemetry projects.

Knowledge of infrastructure as code (Terraform/CDK).
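As a rough illustration of the event-driven serverless pattern this role works with (sketched in Python for brevity; the actual stack is Node.js/TypeScript, and the `device_id` payload field is hypothetical), a Lambda-style handler can consume an SQS batch and report per-message failures so only the bad messages are redelivered:

```python
import json

def handler(event, context=None):
    """Minimal SQS-triggered handler sketch.

    Parses each record's JSON body and collects failures in the
    partial-batch-response shape, so SQS redelivers only bad records.
    (The "processed" key is extra, for illustration.)
    """
    failures = []
    processed = []
    for record in event.get("Records", []):
        try:
            body = json.loads(record["body"])
            # e.g. route telemetry downstream for analytics
            processed.append(body["device_id"])
        except (KeyError, ValueError):
            failures.append({"itemIdentifier": record.get("messageId")})
    return {"batchItemFailures": failures, "processed": processed}

event = {"Records": [
    {"messageId": "m1", "body": json.dumps({"device_id": "sensor-7", "temp": 21.5})},
    {"messageId": "m2", "body": "not-json"},
]}
result = handler(event)
```

With partial batch responses enabled on the event source mapping, only `m2` would return to the queue for retry.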

 

Soft Skills

Extreme Ownership: The ability to take on a domain and drive its resolution end to end.

Communicating with Judgment: The ability to challenge and collaborate with technical stakeholders (Data Teams).

Proactivity: You don't wait for instructions; you identify bottlenecks and propose solutions.

 

Benefits

 

Remote work

Participation in a high-impact regional strategic project.

Collaboration with an international team and strong technical leadership.

Opportunities for professional growth within digital transformation projects.

 

Why join Coderio?

 

We are remote-first and passionate about technology, collaborative work, and fair compensation. We offer an inclusive, challenging environment with real opportunities for growth. If you are motivated to build impactful solutions on global finance and HR projects, we are waiting for you. Apply now.


Please mention the word **MESMERIZINGLY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Data Engineer
  • Thoughtworks
  • Chicago
design security technical support
Senior data engineers at Thoughtworks are engineers who build, maintain and test the software architecture and infrastructure for managing data applications. They are involved in developing core capabilities which include technical and functional data platforms. They are the anchor for functional streams of work and are accountable for timely delivery. They work on the latest big data tools, frameworks and offerings (data mesh, etc.), while also being involved in enabling credible and collaborative problem solving to execute on a strategy.

Job responsibilities

  • You will develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions.
  • You will develop intricate data processing pipelines, addressing clients' most challenging problems.
  • You will collaborate with data scientists to design scalable implementations of their models.
  • You will write clean, iterative code using TDD and leverage various continuous delivery practices to deploy, support and operate data pipelines.
  • You will use different distributed storage and computing technologies from the plethora of options available.
  • You will develop data models by selecting from a variety of modeling techniques and implementing the chosen data model using the appropriate technology stack.
  • You will collaborate with the team on the areas of data governance, data security and data privacy.
  • You will incorporate data quality into your day-to-day work.

Job qualifications

Technical Skills

  • Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems.
  • You have hands-on experience with data modeling and modern data engineering tools and platforms.
  • You have experience in writing clean, high-quality code using the preferred programming language.
  • You have built and deployed large-scale data pipelines and data-centric applications using any of the distributed storage platforms and distributed processing platforms in a production setting.
  • You have experience wit

Please mention the word **UNBIASED** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Engineering Manager
  • Hinge Health
  • Bengaluru
manager architect technical support

The Opportunity

Hinge Health is hiring an Engineering Manager for our Growth Data Platform (GDP) pod in Bangalore. This is a pivot-point role for a leader who is ready to move beyond traditional software management and lead a team into the era of AI-Native Engineering and ML-Driven Growth.

The GDP pod is the engine room of Hinge Health's growth strategy. You own the data pipelines, event streams, and the emerging "Intelligence Layer" that powers every member interaction, from the first ad they see to the "Daily Streak" notification that keeps them pain-free.

In 2026, your mission is to transform GDP from a data mover to a decision engine. You will partner with Data Science to operationalize high-value ML models (like our Direct Mail Propensity Model and Contextual Bandits) that autonomously decide the channel, content, and timing of our marketing. Simultaneously, you will pioneer our "Harness Engineering" initiative, transforming your pod's workflow from manual coding to managing autonomous AI agents that build, test, and verify our data infrastructure.

You will lead a high-performing team in Bangalore, serving as the strategic bridge between SF Product Strategy and technical execution.


What You’ll Accomplish

  • Build the "Intelligence Layer": Move beyond simple data piping. Architect the real-time decisioning layer that ingests ML signals (e.g., Churn Risk, Propensity to Convert) and routes them instantly to execution platforms like Iterable.

  • Operationalize Growth ML Models: Partner with Data Science to take predictive models out of the lab and into production. You will own "Phase 3" of the model lifecycle: hardening, serving, and monitoring models that control millions of dollars in marketing spend.

  • Lead the Transition to Harness Engineering: Drive the adoption of AI-native workflows (using tools like Cursor and Claude Code). Shift the team’s focus from "typing code" to building the test harnesses, specs, and safety rails that allow agents to autonomously maintain our pipelines.

  • Guarantee Data Trust ("Glass Box" Observability): Champion a culture of radical observability. Implement automated "data sentinels" and contract tests that catch schema violations and freshness issues before they impact our marketing campaigns.
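The "data sentinel" idea in the bullets above can be sketched minimally (the contract fields, types, and one-hour freshness SLA are all hypothetical; a real implementation would sit in the pipeline and block or alert before signals reach marketing platforms):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical data contract: required fields and their expected types.
CONTRACT = {"member_id": str, "churn_risk": float, "emitted_at": str}

def check_batch(rows, max_age=timedelta(hours=1), now=None):
    """Validate a batch against the contract and a freshness SLA.

    Returns a list of (row_index, reason) violations: missing fields,
    type mismatches, and records older than max_age.
    """
    now = now or datetime.now(timezone.utc)
    violations = []
    for i, row in enumerate(rows):
        for field, ftype in CONTRACT.items():
            if field not in row:
                violations.append((i, f"missing field {field!r}"))
            elif not isinstance(row[field], ftype):
                violations.append((i, f"bad type for {field!r}"))
        if isinstance(row.get("emitted_at"), str):
            emitted = datetime.fromisoformat(row["emitted_at"])
            if now - emitted > max_age:
                violations.append((i, "stale record"))
    return violations

now = datetime(2026, 1, 1, 12, 0, tzinfo=timezone.utc)
rows = [
    {"member_id": "m1", "churn_risk": 0.82, "emitted_at": "2026-01-01T11:30:00+00:00"},
    {"member_id": "m2", "churn_risk": "high", "emitted_at": "2026-01-01T09:00:00+00:00"},
]
violations = check_batch(rows, now=now)
```

Here the second row fails twice: its `churn_risk` is a string instead of a float, and it is three hours old against a one-hour SLA.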

Basic Qualifications

  • 2+ years of experience managing engineering teams. You are a "player-coach" who can build a "One Team" culture, bridging the gap between SF and Bangalore with high-agency leadership.

  • 3+ years of experience with data engineering technologies including experience with distributed data processing frameworks (e.g., PySpark, Databricks) and SQL.

  • Experience with production data pipelines and understanding of data lifecycle management, including pipeline orchestration, monitoring, and operational excellence practices.

Preferred Qualifications

  • ML Ops & Model Serving Experience: You understand the lifecycle of data and models. You have experience with Kafka and event-driven architectures, and you know what it takes to serve an ML model in production (latency, feature stores, drift monitoring).

  • AI-Forward Leadership: You are excited, not intimidated, by the shift to AI-assisted engineering. You are eager to experiment with new workflows where engineers act as architects and auditors of AI-generated code.

  • Architectural Rigor: You can simplify complex systems. You have a track record of converging "sprawling" pipeline patterns into robust standards (e.g., moving ad-hoc scripts into a unified Event-Driven Architecture).

  • Operational Excellence: You value SLOs, runbooks, and incident management. You believe that "production reliability" is a feature, especially when dealing with data that drives real-time member health decisions.

  • Experience with Marketing Tech (Iterable, Braze) or Customer Data Platforms (Segment, Hightouch).

  • Experience implementing Contextual Bandits or similar experimentation frameworks.

  • Background in Healthcare/HIPAA compliant environments.

About Hinge Health

At Hinge Health, we’re using technology to scale and automate the delivery of healthcare – starting with musculoskeletal (MSK) conditions, which affect over 1.7 billion people worldwide. With an AI-powered human-centered care model, Hinge Health leverages cutting-edge technology to improve outcomes, experiences and costs to help people move beyond their pain. The platform addresses a broad spectrum of MSK care – from acute injury, to chronic pain, to post-surgical rehabilitation – through personalized, evidence-based care.

As the preferred partner to 50+ health plans, PBMs and other ecosystem partners, Hinge Health is available to over 20 million people across more than 2,550 employers. The company is headquartered in San Francisco with additional offices in Montreal and Bangalore. Learn more at http://www.hingehealth.com.

Hinge Health Hybrid Model

We believe that remote work and in-person work have their own advantages and disadvantages, and we want to be able to leverage the best of both worlds. Employees in hybrid roles are required to be in the office 3 days/week.

This is a Bengaluru-based role that involves regular interaction and collaboration with Hinge Health colleagues in San Francisco, CA. Time zones: San Francisco is the Pacific Time Zone, which is 12 hours and 30 minutes behind India Standard Time – for example, 8am in San Francisco is 8:30pm in Bengaluru. Standard working hours in San Francisco are between 8am - 6pm. For this role, applicants should be open to meetings in the late evening following India Standard Time.

What You'll Love About Us

  • Inclusive healthcare and benefits: In addition to comprehensive medical, dental, and vision coverage, we provide employees and their family members with Group Medical Coverage (GMC), Group Term Life Insurance (GTL), and Group Personal Accident Insurance (GPA).

  • We also offer a lifestyle stipend to support your overall well-being, along with learning and development opportunities to help you grow both personally and professionally.

  • Grow with us through discounted company stock through our ESPP with easy payroll deductions.

Culture & Engagement

Hinge Health is an equal opportunity employer and prohibits discrimination and harassment of any kind. We make employment decisions without regards to race, color, religion, sex, sexual orientation, gender identity, national origin, age, veteran status, disability status, pregnancy, or any other basis protected by federal, state or local law.

By submitting your application you are acknowledging we are using your personal data as outlined in the personnel and candidate privacy policy.



Beware of Phishing Attempts: We've noticed an increase in phishing where fraudsters impersonate employees and send fake job offers to steal sensitive information. We'll never ask for financial details during the hiring process and only use "@hingehealth.com" emails. If you receive a suspicious offer, stop communication and report it to the US FBI Internet Crime Complaint Center. To verify an email from our recruiting team, forward it to security@hingehealth.com.



Please mention the word **BELIEVABLE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Machine Learning Engineer
  • Radformation
  • Remote
design support software code

About Radformation

Radformation is transforming the way cancer clinics deliver care. Our innovative software automates and standardizes radiation oncology workflows, enabling clinicians to plan and deliver treatments faster, safer, and more consistently, so patients everywhere can receive the same high-quality care.

Our software focuses on three key areas:

  • Time savings through automation.
  • Error reduction through automated systems.
  • Increased quality care through advanced algorithms and workflows.

We are a fully remote, mission-driven team united by a shared goal: to reduce cancer’s global impact and help save more of the 10 million lives it claims each year. Every line of code, every product release, and every conversation with our customers brings us closer to ensuring no patient’s treatment quality depends on where they live.

Why This Role Matters

In this role you will help advance Radformation’s AI-driven radiotherapy products by building and improving machine learning models that directly impact clinical workflows and patient outcomes.

You will work closely with AI, cloud, research, and product teams to develop scalable data pipelines, improve model performance, and support regulatory submissions for medical device software.

Responsibilities Include:

  • Design, build, and maintain robust ETL pipelines to support AI model development and deployment.
  • Develop, train, and optimize machine learning models used in radiotherapy software.
  • Collaborate with product and research teams to bring new AI-driven features and algorithms into production.
  • Support FDA submissions by contributing to documentation, validation, and regulatory processes.
  • Participate in design reviews, risk analyses, and cross-functional discussions to ensure safe and effective products.
  • Mentor junior engineers and data scientists and contribute to a collaborative team environment.

Required Experience:

  • MS in Computer Science, Mathematics, Statistics, or a related field with 3+ years of experience.
  • Expert-level proficiency in Python.
  • Hands-on experience building, training, and tuning machine learning models.
  • Strong experience with PyTorch and/or TensorFlow.
  • Experience developing convolutional neural networks, including U-Net architectures.
  • Experience using Git and modern code repositories (GitHub, Bitbucket, Azure DevOps, etc.).

Preferred Experience:

  • Experience with medical imaging and image processing techniques (segmentation, resampling, smoothing).
  • Familiarity with clinical data standards such as DICOM or HL7.
  • Experience working in regulated environments (HIPAA, FDA, or medical device software).
  • Experience with modern AI-assisted development tools (e.g., Cursor, Claude Code, Codex).

AI & Hiring Integrity

At Radformation we believe AI can be an incredible tool for innovation, but our hiring process is all about getting to know you, your skills, experience, and unique approach to problem solving. We ask that all interviews and assessments be completed without tools that generate answers in real time. This helps ensure a fair process for everyone and allows us to see your authentic work. Using such tools during the process may affect your candidacy.

Benefits & Perks — What Makes Us RAD

We care about our people as much as we care about our mission. We offer competitive compensation, benefits, and the opportunity to make an impact in the fight against cancer. The salary range for this role is $160,000 - $200,000 USD base, plus bonus eligibility.

For US teammates (via TriNet):

Health & Wellness

  • Multiple high-quality medical plan options with substantial employer contributions toward premiums, often covering the full cost depending on the plan selected.
  • Health coverage starting on day one
  • Short-term and long-term disability and supplementary life insurance

Financial & Professional Growth

  • 401(k) with employer match vested immediately
  • Annual reimbursement for professional memberships
  • Conference attendance and continued learning opportunities

Work-Life Balance & Perks

  • Self-managed PTO and 10 paid holidays
  • Monthly internet stipend
  • Company-issued laptop and one-time home office setup stipend
  • Fully remote work environment with virtual events and yearly retreats, because we like to have fun while doing work that matters

For global teammates (via Deel):
At Radformation, we want every team member to feel supported, no matter where they live. For teammates outside the US, we provide benefits that align with local laws and standards, working with our Employer of Record (EOR) partners to ensure fairness and equity. This means your benefits package will be locally compliant, competitive, and designed to support your health, financial security, and work-life balance.

Our Commitment to Diversity

Cancer affects people from every walk of life, and we believe our team should reflect that diversity. Radformation is proud to be an equal opportunity workplace and an affirmative action employer. We welcome candidates from all backgrounds and are committed to fostering an inclusive environment for all employees.

Agency & Candidate Safety Notice

Radformation does not accept unsolicited resumes from agencies without a signed agreement in place. We do not partner with third-party recruiters unless explicitly stated. All legitimate communication from Radformation will come from an @radformation.com email address. If you receive outreach from another domain or via unofficial channels, please contact careers@radformation.com.


Please mention the word **EBULLIENTLY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Engineering Manager Data Platform
  • TrueML
  • Remote in USA
manager design system python

Why TrueML?

 

TrueML is a mission-driven financial software company that aims to create better customer experiences for distressed borrowers. Consumers today want personal, digital-first experiences that align with their lifestyles, especially when it comes to managing finances. TrueML’s approach uses machine learning to engage each customer digitally and adjust strategies in real time in response to their interactions.

 

The TrueML team includes inspired data scientists, financial services industry experts and customer experience fanatics building technology to serve people in a way that recognizes their unique needs and preferences as human beings and endeavoring toward ensuring nobody gets locked out of the financial system.


About This Role:

As the Engineering Manager for our Data Platform, you will be the primary architect of the ecosystem that powers TrueML’s intelligence. We are currently in a phase of purposeful scaling, and we need your leadership to build a rock-solid, high-performing data foundation that bridges the gap between raw infrastructure and actionable insights. Your goal is to champion data integrity and technical excellence while leading a world-class team during this period of deliberate expansion.



What You'll Do:
  • Empower a Talented Team: Lead, manage, and mentor a group of data engineers, fostering their career development and championing a culture of technical excellence.
  • Architect Resilient Infrastructure: Own the design and development of data pipelines and systems to ensure they are prepared for company-wide expansion.
  • Champion Data Trust: Act as a relentless advocate for data quality by implementing the system controls and SLAs necessary for flawless production processes.
  • Collaborate Strategically: Partner cross-functionally with Data Science and Product managers to translate complex business needs into efficient, well-documented data models.
  • Maintain Technical Excellence: Perform high-impact code reviews and provide critical guidance to optimize ETL pipelines and schema performance.
  • Balance Leadership with Craft: Contribute directly to development work and troubleshooting alongside your team when the mission requires it.
  • Drive Data Accessibility: Ensure data is a true business enabler by making it reliable and easily accessible for stakeholders across the company.


Who You Are:

  • An Experienced Leader: You have 2+ years of hands-on management experience and 5+ years of relevant data engineering expertise, with a track record of growing teams through coaching.

  • A Big Data Expert: You have deep familiarity with modern technologies like Snowflake, Airflow, BigQuery, or Redshift, and mastery of both RDBMS and NoSQL databases.

  • A Master of the Stack: You possess advanced proficiency in Python or Java and expert-level SQL skills, specifically in scaling schemas and tuning ETL performance.

  • A Systems Thinker: You have extensive experience designing data warehouses and workflow systems, including owning SLAs for critical production processes.

  • An Elite Communicator: You are a natural bridge-builder who can translate deep technical hurdles into clear, actionable updates for business partners.

  • Purpose-Driven: You thrive in environments that value intentional progress and are excited to mature a data ecosystem from the ground up.

  • Bonus Skills: You bring experience with Spark, Scala, or Protocol Buffers, or you have navigated the unique regulatory challenges of the FinTech industry.


$111,700 - $148,900 a year
Compensation Disclosure: This information reflects the anticipated base salary range for this position based on current national/regional data. Minimums and maximums may vary based on location. Individual pay is based on skills, experience, and other relevant factors.

We are a dynamic group of people who are subject matter experts with a passion for change. Our teams are crafting solutions to big problems every day. If you’re looking for an opportunity to do impactful work, join TrueML and make a difference.

 

Our Dedication to Diversity & Inclusion

 

TrueML and TrueAccord are equal opportunity employers. We promote, value, and thrive with a diverse & inclusive team. Different perspectives contribute to better solutions and this makes us stronger every day. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.


For California Applicants: we collect personal information for employment purposes. We do not sell personal information. Most of the information we have is provided to us by you and/or collected as part of the employment process. For more details on how we use, share, and delete personal information see our Privacy Policy.



Please mention the word **EXULTINGLY** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Sr Software Engineer B E
  • Rebuy, Inc.
  • Remote
software design jira saas

The Company You’ll Join

At Rebuy, we’re on a mission to revolutionize shopping with intelligent, personalized experiences that wow customers around the globe. As a fully remote team, we power some of the fastest-growing DTC brands like Aviator Nation, Liquid Death, Magic Spoon, Blenders, Laird Superfoods, Primal Kitchen, and many more.

We believe in ownership, drive, and empathy, and strongly uphold that every team member plays a vital role in shaping the future of intelligent commerce. Our culture thrives on collaboration, creativity, and genuine passion. We don’t just build great tech - we build lasting partnerships, a strong community, and a place where people love to work.

The Problems You’ll Solve

Rebuy and its team members continually strive to create a high-spirited, intentional work environment that stresses performance, productivity, collaboration, and merit.

As a Sr. Software Engineer, Back-End, you’ll own some of the most consequential systems at Rebuy. Your primary anchor is our billing and payments infrastructure — the engine that determines how merchants are charged, how partners get paid, and how financial balances flow across our entire product suite. This is genuinely complex financial engineering. It requires deep PHP and Go expertise, careful architecture, and judgment that no automated tool can replicate. Merchant billing runs daily, touches real revenue, and demands someone who understands both the technical and business dimensions of every decision.

Alongside billing, you’ll grow into a broader platform portfolio — the partner portal, data ETL pipelines, customer-facing APIs, and reporting infrastructure that power the business. And in the near term, you’ll play a critical role in a significant technical migration: moving our legacy CodeIgniter 2 codebase to CodeIgniter 4, including work tied to increasing our enterprise market share. This migration requires hands-on PHP expertise and cannot be deferred.

You won’t be handed a sprawling list of things you must do on day one. You’ll be trusted to grow into this role — and rewarded when you do.

  • Billing & Payments Architecture: Design and build Rebuy’s centralized billing system that handles merchant billing, partner payments, and customer-facing charges. Architect the integration layer that allows payment balances to be applied across Rebuy’s full suite of services. Tackle genuinely complex financial engineering challenges with PHP and Go at scale.

  • Build Robust APIs: Design and implement secure, well-structured APIs in PHP and Go to power billing events, payment processing, and financial data flows across our platform and Shopify integrations.

  • Legacy Modernization: Lead and contribute to the migration of our CodeIgniter 2 codebase to CodeIgniter 4. This is high-priority, near-term work with real business dependencies — including enterprise partnership commitments — and requires a PHP engineer with the experience and judgment to do it right.

  • Agentify the Platform: Partner with product and engineering to identify where AI agents can automate workflows, surface insights, and guide merchants through our product. Build the backend systems — APIs, data pipelines, and event hooks — that enable intelligent automation. This is genuinely new territory and one of the most exciting growth vectors for Rebuy’s product.

  • Platform Breadth: Our team owns more than billing and payments — we also support a partner portal, data ETL pipelines, customer-facing reporting APIs, and the infrastructure that makes data flow reliably across the business. You won’t be responsible for all of it on day one, but you’ll have genuine opportunities to grow into the areas that most interest you. Engineers here don’t get siloed; they get context.

  • Engineering Best Practices: Contribute significantly to the engineering culture at Rebuy by establishing, documenting, and promoting best practices. Lead initiatives to introduce and standardize frameworks and tools that increase development efficiency and maintainability.

  • Security & Compliance: Stay current with the latest security trends, vulnerabilities, and best practices as they apply to billing and payment systems. Champion security-first engineering across authentication, authorization, data encryption, and compliance considerations in everything you build.

  • PHP Technical Leadership: Serve as a key technical anchor for PHP across the engineering organization. Rebuy’s codebase has significant PHP depth and relatively few engineers with that expertise. You’ll lead code reviews, share knowledge actively, and help raise the PHP competency of the broader team.

  • Quality Assurance: Conduct quality checks on deliverables to ensure code, setup, and configurations meet expected results. Ensure that all features meet high standards of quality and performance before deployment.

  • Team Collaboration: Engage actively in building a strong team culture. Work closely with the Product Owner, Engineering Manager, and peers across billing, payments, partner tools, and data infrastructure to define requirements, estimate effort, and drive solutions forward. This is a team where your voice matters — you won’t just be handed tickets. Assist the Support team in triaging and resolving high-priority production issues.
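As a hedged illustration of the balance-application idea in the billing bullets above (service names and amounts are hypothetical, and a production system would add ledger entries, idempotency keys, and audit trails), a credit balance can be applied across per-service charges with Decimal-safe arithmetic:

```python
from decimal import Decimal

def apply_balance(charges, balance):
    """Apply an available credit balance across per-service charges.

    Consumes the balance in iteration order and returns the amount still
    owed per service plus any leftover credit. Sketch only.
    """
    remaining = Decimal(balance)
    settled = {}
    for service, amount in charges.items():
        amount = Decimal(amount)
        credit = min(amount, remaining)
        remaining -= credit
        settled[service] = amount - credit  # amount still owed after credit
    return settled, remaining

charges = {"smart-cart": "30.00", "email": "45.50", "search": "10.00"}
settled, leftover = apply_balance(charges, "50.00")
```

With a $50.00 credit, the first charge is fully covered, the second is partially covered, and the third remains fully owed.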

Technologies We Use:

  • AI: Anthropic Enterprise Claude Code / Co-work, Cursor, Adhoc AI tools budget.

  • Frontend Technologies: React, TypeScript, GraphQL, VueJS, Angular

  • Backend technologies: PHP, Go, MySQL, Bigtable, Elasticsearch

  • Other Tools: Jira, Bitbucket, Confluence, Google Suite, Slack, 1Password, Notion


Who You Are

We’re stoked to meet you and get to learn more about you, your experience and your interest in joining our team.

The Hard Skills:

  • Experience building or maintaining billing, payments, or financial systems — including working with payment processors, subscription engines, invoicing pipelines, or similar financial infrastructure in a production SaaS environment.

  • Educational background in CS // Engineering or a similar area.

  • 5+ years of hands-on experience building backend applications with PHP and Go, with a proven track record of delivering complex, high-traffic systems.

  • Experience designing and implementing secure, scalable, and maintainable RESTful APIs in PHP and Go, with a deep understanding of API design patterns, versioning, and performance optimization.

  • Experience with cloud-based technologies, preferably GCP.

  • Strong understanding of a performant SaaS environment.

  • Experience in a Scrum/Agile environment.

  • Experience with the Atlassian suite, including Jira and Bitbucket.

  • Solid understanding of security fundamentals as they apply to backend and financial systems — including secure coding practices, authentication/authorization patterns, data encryption, and awareness of current vulnerability trends (e.g. OWASP Top 10)

The Soft Skills:

  • A collaborative mindset and work approach with the ability to lead projects and mentor others.

  • The ability to thrive in a fast-paced environment with a high level of autonomy and responsibility.

  • Excellent communication skills, especially being able to explain technical concepts to both technical and non-technical audiences.

  • Genuinely curious about the intersection of engineering and business. You care about the downstream impact of what you build — not just that the code works, but that it moves the company forward.

Who You’ll Meet With

Now let’s get into who you’ll meet during our interview process! After you submit your application and it’s been reviewed by our team, we will reach out to you inviting you to meet with us. From there, you can expect an interview process similar to this:

  • An introductory call with someone from the Talent Acquisition team for about 30 min.

  • Interview with the Hiring Manager to learn more about you and answer your questions about Rebuy and this role.

  • A coding challenge and whiteboarding exercise to show us your skillset during a live panel interview with a few team members.

  • Short final interview with our CEO and COO where you’ll get to learn more about Rebuy.

The Perks You’ll Enjoy

Rebuy is a fully remote company across the U.S. and Canada that aims to provide all of our team with the resources, support and flexibility they need to thrive in their roles.

  • Team: We’ve got the best, brightest, most brilliant team members who are excited to meet you! We also like to think we have a good sense of humor.

  • Remote Work: With a strong internet connection, you’re able to work from anywhere within the U.S. and Canada.

  • PTO: We offer a flexible vacation policy, generous holiday schedule, parental leave and sick policy. There are other policies too, like a birthday holiday!

  • Amazing Benefits: 100% free health and dental insurance for you and your family. Don’t worry, there’s even more!

  • Retirement Plans: For our U.S. employees we offer 401(k) retirement plans, and for our Canadian employees we offer TFSA and RRSP retirement plans. You’ll also enjoy a 3% contribution of your gross salary, no matter where you’re located!

Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $130,000 - $180,000 USD annually. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience. Your recruiter and hiring manager can share more about the specific salary range for the job location during the hiring process.

Disclosures:

Equal Opportunity Statement

Rebuy, Inc. is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.

Rebuy, Inc. aims to make rebuyengine.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email hr@rebuyengine.com.



Please mention the word **SUPPORTER** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$91455 - $137273 Full time
redis sysadmin technical support

Who we are

We're Redis. We built the product that runs the fast apps our world runs on. (If you checked the weather, used your credit card, or looked at your flight status online today, you’re welcome.) At Redis, you’ll work with the fastest, simplest technology in the business—whether you’re building it, telling its story, or selling it to our 10,000+ worldwide customers. We’re creating a faster world with simpler experiences. You in?

Why would you love this job?

As a Technical Support Engineer, you will help customers by diagnosing and resolving complex technical issues in a high-contribution role that offers exciting technical challenges, ongoing learning, and the satisfaction of helping name-brand customers as part of our fun, tight-knit team.

In this role, you will use and extend your existing technical depth and increase your technical breadth by addressing complex problems for the top companies in the world. You will become an expert problem solver on Redis Enterprise Software, which thousands of customers worldwide use as a high-performance database. You will dive deep into exciting cutting-edge technologies by supporting Redis Enterprise running on the top cloud platforms and container orchestration platforms.

Join the best of the best and continuously learn new things. We are looking for brilliant experts who are curious, persistent, and happy digging through the full stack, from code to Sysadmin to networking to performance. If this sounds like you, please check out the technical foundation we’d like you to bring.

What you’ll do:

  • Work with customers to troubleshoot and resolve complex software issues:

    • Reproduce issues, replicating customer environments as needed.

    • Document issues and contribute to our internal team documentation.

    • Provide Root Cause Analysis

  • Collaborate with Engineering as needed to provide solutions.

  • Analyze performance questions that may arise along the data path (including networks) for deployments that may be in the Cloud or On-premises.

  • Provide technical expertise during testing, deployment, and upgrading of Redis software.

  • Manage critical customer issues, facilitating communication between customers, CloudOps, Engineering, Product, TAMs, and Sales.

  • Serve as the customer advocate for timely resolution of issues and handling escalations while helping customers realize and maximize the value of their Redis subscription.

  • Participate in new product development, customer training, and other support-related activities.

This role requires a 5-day work week that includes Saturday and Sunday.

What will you need to have?

  • At least five years of technical experience as a Support Engineer, Systems Engineer, Software Engineer, or Site Reliability Engineer in an enterprise software company

  • At least four years of experience troubleshooting real-time production systems

  • At least two years of hands-on experience with cloud infrastructure.

  • Strong background in scripting or programming languages (Python, Java, C#, JavaScript, Bash, Powershell, etc.)

  • Expert working knowledge in Linux/Unix and networking (TCP/IP)

  • Professional experience working with networking tools like Wireshark, tcpdump, etc.

  • Experience in analyzing and debugging production issues at scale.

  • Experience with alerting and monitoring systems (Prometheus, Grafana, ELK, Splunk, etc.).

  • Working knowledge of Cloud-based and On-premises environments

  • Proficiency in communication and presentation, both written and verbal (in English)

  • Strong technical background with excellent problem-solving and multi-tasking skills

  • Willingness to be highly available and committed to customers at any time

Extra great if you have:

  • Bachelor of Science in Computer Science or Information Systems

  • Experience with NoSQL databases (especially Redis)

  • Experience working with container orchestration environments, such as Kubernetes

The estimated gross base annual salary range for this role is $91,455 – $137,273 per year in New York, California, Washington, Colorado, and Rhode Island. Actual compensation may vary and is dependent on various factors, including a candidate’s work location, qualifications, experience, and competencies. Base annual salary is one component of Redis’ total compensation and competitive benefits package, which may include 401(k), unlimited time off, learning and development opportunities, and comprehensive health and wellness benefits. This role may include discretionary bonuses, stock options, commuter benefits based on location, or a commission plan. Salary history is not used in compensation package decisions. Redis utilizes market pay data to determine compensation, so posted compensation ranges are subject to change as new market data becomes available.

As a global company, we value a culture of curiosity, diversity of thought, and innovation from our employees, customers, and partners. Redis is committed to a diverse and inclusive work environment where all employees’ differences are celebrated and supported, and everyone feels safe to bring their authentic selves to work. Redis is dedicated to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender identity, gender expression, Veteran status, or any other classification protected by federal, state, or local law. We strive to create a workplace where every voice is heard, and every idea is respected.

Redis is committed to working with and providing access and reasonable accommodation to applicants with mental and/or physical disabilities. If you think you may require accommodations for any part of the recruitment process, please send a request to recruiting@redis.com. All requests for accommodations are treated discreetly and confidentially, as practical and permitted by law.

Any offer of employment at Redis is contingent upon the successful completion of a background check, consistent with applicable laws.

Redis reserves the right to retain data longer than stated in the privacy policy in order to evaluate candidates.



Please mention the word **EASED** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
software growth code payroll

Who We Are

Wingspan is the first payroll platform designed specifically for independent contractors and their businesses. We simplify onboarding, payments, and compliance for flexible workforces of all sizes, from solo operators to large enterprises. 

We're a Series B startup based in NYC with distributed teams in the USA, Poland, and the UK, and backed by Andreessen Horowitz (a16z), Touring Capital, and a strong network of operators, including the CEOs and founders of Warby Parker, Harry's, Allbirds, Invision, and Flatiron Health.

About the Role

As a Software Engineer on the Payment Operations team, you will be responsible for the execution layer that ensures every dollar on Wingspan's platform is accounted for, reconciled, and moved accurately on time. You will have direct access to production systems, a mandate to identify what's broken or inefficient, and the authority to engineer the fix. 

This role reports to the Head of Payments & Compliance Operations and is based in Warsaw, Poland, with a remote work model.

What You'll Do

  • Design, develop, and ship internal systems and automation that eliminate entire categories of operational toil, owning every problem end-to-end from initial diagnosis to permanent fix
  • Build and maintain reconciliation infrastructure that keeps Wingspan's ledger, bank records, and platform transaction data in continuous alignment, automatically and at scale
  • Develop monitoring and alerting systems that surface funding health issues and payment anomalies in real time, ensuring problems are caught and resolved before they ever reach a customer
  • Collaborate with Engineering, Product, and Finance to identify recurring operational patterns and translate them into platform-level improvements that raise the reliability ceiling for the entire system
  • Contribute to the growth of our engineering culture by sharing knowledge, participating in code reviews, and proactively identifying opportunities to improve how the team builds, observes, and automates

Qualifications & Requirements

  • 3+ years of experience in a software engineering or engineering-adjacent role with exposure to payment systems, backend services, or data pipelines
  • Strong SQL skills; comfortable writing standalone scripts and using AI tools such as Claude Code, OpenAI, etc.
  • Familiarity with RESTful APIs and backend services, with Node.js an advantage

Please mention the word **FREE** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$127000 - $159000 Full time
software react system security

About Equip 

Equip is the leading virtual, evidence-based eating disorder treatment program on a mission to ensure that everyone with an eating disorder can access treatment that works. Created by clinical experts in the field and people with lived experience, Equip builds upon evidence-based treatments to empower individuals to reach lasting recovery. All Equip patients receive a dedicated care team, including a therapist, dietitian, physician, and peer and family mentor. The company operates in all 50 states and is partnered with most major health insurance plans. Learn more about our strong outcomes and treatment approach at www.equip.health.

Founded in 2019, Equip has been a fully virtual company since its inception and is proud of the highly-engaged, passionate, and diverse Equipsters that have created Equip’s culture. Recognized by Time as one of the most influential companies of 2023, along with awards from LinkedIn and Lattice, we are grateful to Equipsters for building a sustainable treatment program that has served thousands of patients and families.

About the role:

Equip's engineering culture emphasizes agility, collaboration, and ownership, fostering a team of problem-solvers who build a robust, scalable healthcare platform. As a Senior DevOps Engineer, you'll be crucial in developing and maintaining infrastructure, platforms, and developer tools, including CI/CD pipelines, cloud infrastructure, and observability tools, to enable efficient development and scaling. You'll also support web (Java, React, PostgreSQL) and mobile (React Native) applications, standardizing AWS deployments and CI/CD practices. The role will involve building security, metrics, logging, and deployment tooling to ensure system reliability and scalability. Our goal is to create intuitive, reliable systems that allow engineers to iterate quickly and deliver value to patients, with direct user feedback driving our highest-impact work.

Responsibilities:

  • Design and build a robust, scalable cloud platform to empower web and data engineering teams to deliver high-quality applications.

  • Partner with engineering and data teams to improve developer velocity, ensure system reliability, and embed operational excellence.

  • Lead best practices in cloud infrastructure architecture, CI/CD automation, monitoring, and backend systems reliability.

  • Develop tools and automation across a variety of frameworks and languages to enhance the performance, availability, and scalability of services.

  • Contribute to a culture of continuous improvement through proactive monitoring, root cause analysis, and knowledge sharing.

  • Perform other duties as assigned.

Qualifications:

  • Bachelor's degree or equivalent training and work experience in Computer Science, Software Engineering, or a related field

  • 5–10 years of experience in DevOps, SRE, Platform Engineering, or Software Engineering roles.

  • Deep expertise in AWS and its ecosystem of services.

  • Proven track record building cloud infrastructure using Infrastructure as Code (Terraform, CloudFormation)

  • Strong experience with container orchestration and serverless architectures, including ECS/Fargate and Docker

  • Solid understanding of AWS networking concepts, including VPCs, subnets, security groups, route tables, and load balancers.

  • Hands-on experience creating and maintaining CI/CD pipelines (e.g., CircleCI, GitLab CI, etc.).

  • Strong experience with scalable backend systems, including microservices, APIs, caching layers, and various databases.

  • Experience deploying and managing React and other JavaScript applications using AWS services like CloudFront and S3.

  • Experience setting up comprehensive monitoring and alerting for infrastructure, services, and data pipelines.

  • Skilled at identifying, diagnosing, and preventing production issues through effective observability and troubleshooting (NewRelic, DataDog)

  • Commitment to building secure systems with best practices in access control, encryption, and secure deployment pipelines.

  • Experience communicating and collaborating with engineering and product team stakeholders.

  • Proven ability to manage multiple projects with competing priorities.

  • Ability to work Eastern or Central time zone hours: either 9–5 Eastern or 8–4 Central.

Benefits

Time Off:

  • Flex PTO policy (3–5 weeks/year recommended) + 11 paid company holidays.

Medical Benefits:

  • Competitive Medical, Dental, Vision, Life, and AD&D insurance.

  • Equip pays for a significant percentage of benefits premiums for individuals and families.

  • Maven, a company paid reproductive and family care benefit for all employees.

  • Employee Assistance Program (EAP), a company paid resource for mental health, legal services, financial support, and more!

Other Benefits

Work From Home Additional Perks:

  • $50/month stipend added directly to an employee’s paycheck to cover home internet expenses.

  • One-time work from home stipend of up to $500.

Physical Demands

Work is performed 100% from home with requirement to travel once or twice a year for in-person meetings. This is a stationary position that requires the ability to operate standard office equipment and keyboards as well as to talk or hear by telephone. Sit or stand as needed.

#LI-Remote

At Equip, Diversity, Equity, Inclusion and Belonging (DEIB) are woven into everything we do. At the heart of Equip’s mission is a relentless dedication to making sure that everyone with an eating disorder has access to care that works, regardless of race, gender, sexuality, ability, weight, socio-economic status, and any marginalized identity. We also strive toward our providers and corporate team reflecting that same dedication, both in bringing in and retaining talented employees from all backgrounds and identities. We have an Equip DEIB council, Equip For All, also referred to as EFA. EFA at Equip aims to be a space driven by mutual respect and thoughtful, effective communication strategy, enabling full participation of members who identify as marginalized or under-represented and allies, amplifying diverse voices, creating opportunities for advocacy, and contributing to the advancement of diversity, equity, inclusion, and belonging at Equip.

As an equal opportunity employer, we provide equal opportunity in all aspects of employment, including recruiting, hiring, compensation, training and promotion, termination, and any other terms and conditions of employment without regard to race, ethnicity, color, religion, sex, sexual orientation, gender identity, gender expression, familial status, age, disability, weight, and/or any other legally protected classification protected by federal, state, or local law. 

Our dedication to equitable access, which is core to our mission, extends to how we build our "village." In line with our commitment to Diversity, Equity, Inclusion, and Belonging (DEIB), we are dedicated to an accessible hiring process where all candidates feel a true sense of belonging. If you require a reasonable accommodation to complete your application, interview, or perform the essential functions of a role, we invite you to reach out to our People team at accommodations@equip.health.




Please mention the word **COMPLEMENTS** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
system frontend full-stack architect

Join Hostinger, and we’ll grow fast! 🚀


We’re shaping the future of online success - powered by AI and driven by people. With 900+ talented professionals and over 4 million clients in 150 countries, we help creators and entrepreneurs bring their ideas to life faster and easier than ever before.


Our mission: To provide tools that help individuals and small businesses succeed online faster and easier.

Our culture: Guided by 10 company principles.

Our formula for success: Customer obsession, innovative products, and talented teams.


Your role at Hostinger


Join Hostinger’s Delivery Automation team as a Senior Full Stack Automation Engineer, where you’ll focus on building scalable internal platforms and tools that supercharge developer productivity, streamline software delivery, and automate complex manual flows across the company.


In this role, you’ll take ownership of designing and automating workflows that reduce friction for engineers and teams across Hostinger. From CI/CD pipelines and deployment automation to system integrations and cross-team process improvements - your work will enable faster delivery, greater efficiency, and a stronger automation-first culture.

Your impact will span Product, Engineering, and beyond: empowering developers with reliable self-service solutions, helping teams eliminate repetitive tasks, and ensuring Hostinger operates at scale with speed and confidence.


You’ll collaborate closely with stakeholders across engineering and other departments to understand their challenges, architect resilient solutions, and ship intuitive tools backed by robust backend systems. You’ll also explore and adopt emerging technologies - including AI - to continuously elevate developer experience and automation capabilities.


Curious to learn more? Connect with your team:

Mantas Gurskis - Automation Team Lead, Asta Dagienė - Head of Delivery

\n


Your day-to-day
  • Analyze stakeholders’ workflows to identify automation opportunities; design, build, and maintain full-stack automation tools that connect and enhance internal marketing, sales, and business systems.
  • Develop user-friendly internal UIs and dashboards for campaign setup, monitoring, and reporting.
  • Work closely with cross-functional teams to understand workflows and identify automation opportunities.
  • Leverage AI where applicable to optimize decision-making and workflow efficiency.
  • Ensure reliability, scalability, and maintainability of automation systems and infrastructure.


Your skills and experience
  • 3+ years of experience as a Full Stack Developer (Node.js, TypeScript preferred) with backend-heavy contributions.
  • Strong understanding of API design, data pipelines, databases, and frontend development (Vue or similar).
  • Experience with business automation platforms (e.g., Zapier, n8n) is a plus.
  • Comfortable working closely with non-engineering teams to build usable, effective tools.
  • Bonus: experience integrating AI/ML tools into automation workflows.
  • You’re proactive, thrive in ambiguity, and enjoy solving problems that unlock leverage for others.


Benefits for you
  • 🚀 360 Growth: We provide limitless learning opportunities: access to platforms like Reforge and Scribd, global conferences, physical and digital libraries, feedback culture, and mentoring through TesoXchange. Advance your career with internal mobility and grow with a team eager to share knowledge and support your success.
  • 🎯 Freedom & responsibility: Work on your terms: from modern offices in Kaunas and Vilnius, the comfort of home, or anywhere in the world. Enjoy flexibility in managing your schedule and bring your ideas to life in a fast-paced, dynamic environment.
  • 💪Wellness simplified: Your health comes first with insurance from Day 1, gym memberships, recharge leave, and regular health checks. Join sports, arts, and hobby clubs or simply enjoy the balance of a lifestyle that prioritizes wellness.
  • 🎉 Work hard - play hard: Recognize hard work with company events like Summerfest & Winterfest, Town Hall, Meet the Client initiatives, team-buildings, and workations. Enjoy access to the Žalgiris Arena VIP Lounge and celebrate life’s big moments with milestone gifts for weddings, new parenthood, and graduations.


Compensation
  • Gross salary 5600 - 7600 EUR.


\n

Get ready to take your personal and professional growth to new heights! Join Hostinger today and be part of our journey 🚀

Three. Two. Onboard



Please mention the word **PLENTIFUL** and tag RMTU3LjI0NS4yNDcuMTE4 when applying to show you read the job post completely (#RMTU3LjI0NS4yNDcuMTE4). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
software architect technical testing
Come build at the intersection of AI and fintech.

At Ocrolus, we're on a mission to help lenders automate workflows with confidence—streamlining how financial institutions evaluate borrowers and enabling faster, more accurate lending decisions. Our AI workflow and analytics platform for lenders is trusted at scale, processing nearly one million credit applications every month across small business, mortgage, and consumer lending. By integrating state-of-the-art open- and closed-source AI models with our human-in-the-loop verification engine, Ocrolus captures data from financial documents with over 99% accuracy. Thanks to our advanced fraud detection and comprehensive cash flow and income analytics, our customers achieve greater efficiency in risk management and provide expanded access to credit—ultimately creating a more inclusive financial system.

Trusted by more than 400 customers—including industry leaders like Better Mortgage, Brex, Enova, Nova Credit, PayPal, Plaid, SoFi, and Square—Ocrolus stands at the forefront of AI innovation in fintech. Join us, and help redefine how the world's most innovative lenders do business.

We are looking for an exceptionally skilled Senior Software Engineer - Backend with a solid technical background and leadership skills, able to work in a fast-paced environment, to help architect and build the next generation of our backend applications.

What you'll do:

  • Design, implement, and maintain microservices using Python.
  • Design and develop cloud-based software products conforming to industry best practices.
  • Build systems, services, and tools to handle new Ocrolus products and business requirements that securely scale over millions of transactions.
  • Build and scale our fast-growing online services and data pipelines.
  • Collaborate with other teams on security, reliability, and automation.
  • Support the testing process, troubleshooting and resolving issues.

Please mention the word **REFORMS** and tag RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3 when applying to show you read the job post completely (#RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$70000 - $80000 Full time
Data Analyst
  • Criptoro
  • Remote
other analyst crypto defi

We are a Web3-driven company building decentralized products and working with blockchain data to create transparent and data-informed solutions. We are looking for a Junior Data Analyst who is curious about blockchain, crypto, and decentralized ecosystems.


Responsibilities

  • Collect, clean, and analyze on-chain and off-chain data
  • Work with blockchain datasets (transactions, wallets, smart contracts)
  • Build dashboards to track key metrics (users, transactions, TVL, etc.)
  • Identify trends in user behavior and protocol performance
  • Support product, marketing, and token strategy teams with insights
  • Write SQL queries and work with data pipelines


Requirements

  • Education: Bachelor’s degree in Mathematics, Statistics, Economics, Computer Science, or a related field


Technical Skills:

  • Basic knowledge of SQL
  • Proficiency in Excel / Google Sheets
  • Basic Python (pandas, numpy)
  • Understanding of data analysis and statistics
  • Familiarity with BI tools (Tableau, Power BI, or similar)


Web3 / Crypto (Preferred):

  • Basic understanding of blockchain concepts (wallets, transactions, smart contracts)
  • Interest in DeFi, NFTs, or crypto markets
  • Experience with blockchain analytics tools (e.g., Dune, Nansen, Glassnode) is a plus





Please mention the word **YAY** and tag RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3 when applying to show you read the job post completely (#RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
manager growth
Mission Statement

The Platform team creates the technology that enables Spotify to learn quickly and scale easily, enabling rapid growth in our users and our business around the globe. Spanning many disciplines, we work to make the business work; creating the infrastructure, tooling, frameworks, and capabilities needed to welcome a billion customers.

About the Team

We are looking for a passionate Product Manager to join Spotify's Data Platform Studio. Data Platform's mission is to enable the application of data in an intuitive and efficient way—helping Spotify extract value from data at scale. Data Platform is responsible for how data is collected, processed, stored, governed, and made available to the thousands of engineers, data scientists, and analysts who build Spotify's products. With AI agents increasingly writing data pipelines and powering personalization, this is one of the most consequential infrastructure domains at Spotify.

Please mention the word **REVOLUTIONIZE** and tag RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3 when applying to show you read the job post completely (#RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$180000 - $220000 Full time
Data Scientist
  • Junction
  • Remote
python technical cloud management

Healthcare is in crisis and the people behind the results deserve better. With more and more data coming from wearables, lab tests, and patient–doctor interactions, we’re entering an era where data is abundant.

Junction is building the infrastructure layer for diagnostic healthcare, making patient data accessible, actionable, and automated across labs and devices. Our mission is simple but ambitious: use health data to unlock unprecedented insight into human health and disease.

If you're passionate about how technology can supercharge healthcare, you’ll fit right in.

Backed by Creandum, Point Nine, 20VC, YC, and leading angels, we’re working to solve one of the biggest challenges of our time: making healthcare personalized, proactive, and affordable. We’re already connecting millions and scaling fast.

Short on time? TL;DR

  • You: Can define what should be measured, how it should be modeled, and how those insights should shape product and company decisions.

  • Ownership: You’ll own Junction’s highest-leverage statistical, modeling, and evaluation work across diagnostics, clinical workflows, and AI-enabled product development.

  • Scope: This is not a pure IC modeling role and not a reporting role. You’ll set the methodology, research roadmap, and decision framework for how Junction uses data to drive product, clinical, and business outcomes.

  • Salary: $180,000 – $220,000 + equity

  • Location: Fully remote (EST timezone only)

Why we need you

Junction sits in the flow of high-value diagnostics and clinical data. As the company grows, our advantage moves beyond simply having data to being able to turn it into reliable intelligence that improves product decisions, customer outcomes, and the performance of the business.

Some of that work exists today, but it is not yet owned as a coherent function. Models get built. Analyses get done. Experiments answer local questions. But we need someone who can define the broader scientific and analytical system: what we should measure, what methods we trust, where modeling creates real leverage, and how that work translates into products and decisions that hold up outside a demo.

We’re hiring our first Data Scientist to take ownership of that work and establish that standard.

This role will lead Junction’s most important modeling, experimentation, and evaluation work. You’ll partner closely with the data, product, engineering, and leadership teams to drive the analytical roadmap through which Junction can extract differentiated value from its data.

What you’ll be doing day to day

  • Own the research and modeling work underlying Junction’s highest-priority data science opportunities across diagnostics, clinical workflows, and AI-enabled product features

  • Define rigorous frameworks for measurement, experimentation, and causal evaluation so we can distinguish signal from noise and make decisions we can defend

  • Lead development of predictive models, segmentation approaches, risk or routing logic, and other statistical systems that directly inform product and business strategy

  • Build the analytical foundation behind customer-facing features — from model development through to validation and performance tracking

  • Partner with engineering and data engineering to ensure models and analytical systems can be put into production and are reliable and useful in real workflows

  • Establish how Junction evaluates data-driven and AI-enabled features, including methodology, quality thresholds, monitoring, and performance review

  • Communicate complex technical findings clearly to technical and non-technical stakeholders, including tradeoffs, limitations, and implications for action

Requirements

  • Strong track record of leading high-stakes analytical work that influenced product, operational, or business decisions

  • Deep foundation in statistical inference, experimental design, observational analysis, and model evaluation

  • Strong Python and/or R skills, with experience working on large, messy real-world datasets

  • Experience building predictive or decision-support models in production or near-production environments

  • Experience partnering closely with engineering to move work from analysis or prototype into deployed systems

  • Ability to operate at both strategic and hands-on levels: defining the roadmap while also getting into the details when needed

  • Strong communication and stakeholder management skills; able to explain methods, findings, and tradeoffs to executives as well as technical peers

  • Comfort operating in a startup environment with ambiguity, limited structure, and high ownership

Nice to have

  • Experience designing, executing, and publishing research studies

  • Experience with HIPAA, PHI, or other regulatory clinical frameworks

  • Deep familiarity with modern data tooling and production workflows across warehouses, orchestration, and transformation layers

  • Experience developing, deploying, and designing evaluation frameworks for LLM or AI-powered features in customer-facing products

  • Expertise directly working with healthcare, diagnostics, lab data, wearable data, and other clinical data

  • Experience applying causal inference methods, such as diff-in-diff, propensity scoring, or instrumental variables in practice
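For readers unfamiliar with these techniques, the simplest of them, a 2x2 difference-in-differences, can be sketched in a few lines; the function and the numbers below are hypothetical illustrations, not anything specific to this role:

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Classic 2x2 difference-in-differences estimate from group means.

    Subtracting the control group's pre/post change removes the shared
    time trend, leaving the treatment effect (under parallel trends).
    """
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical example: conversion rates before/after a feature launch.
effect = diff_in_diff(treat_pre=0.10, treat_post=0.18,
                      ctrl_pre=0.11, ctrl_post=0.13)
print(round(effect, 3))  # 0.06 (an 8pp change minus a 2pp shared trend)
```

In practice this would be estimated as a regression with controls and standard errors, but the arithmetic above is the core of the method.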

What this role isn’t

  • Not an analytics role focused on dashboards, reporting, or one-off analysis

  • Not an ML platform role — you won’t own infrastructure or tooling

  • Not a good fit if you mainly want to experiment with models or AI ideas without being accountable for how they perform in production

  • Not a good fit if you struggle with ambiguity. Knowing what to work on is part of the job

How you'll be compensated

  • Salary: $180,000 – $220,000 + equity

  • Your salary is dependent on your location and experience level

  • Generous early-stage options (extended exercise window after 2 years of employment)

  • Regular in-person offsites; the last were in Tenerife and Miami

  • Monthly learning budget of $300 for personal development and productivity

  • Flexible, remote-first working - including $1K for home office equipment

  • Monthly budget of $150 to use towards a coworking space

  • 25 days off a year + national holidays

  • Healthcare coverage depending on location

Oh and before we forget:

  • Backend Stack: Python (FastAPI), Go, PostgreSQL, Google Cloud Platform (Cloud Run, GKE, Cloud BigTable, etc), Temporal Cloud

  • Frontend Stack: TypeScript, Next.js

  • API docs are here: https://docs.junction.com/

  • Company handbook is here with engineering values + principles

Important details before applying:

  • We only hire folks physically based in GMT and EST timezones - more information here

  • We do not sponsor visas right now given our stage



Please mention the word **EQUITABLE** and tag RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3 when applying to show you read the job post completely (#RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior App & Frontend Developer AS233
  • Smart Working Solutions
  • Remote
frontend developer embedded architect

About Smart Working
At Smart Working, we believe your job should not only look right on paper but also feel right every day. This isn’t just another remote opportunity - it’s about finding where you truly belong, no matter where you are. From day one, you’re welcomed into a genuine community that values your growth and well-being.

Our mission is simple: to break down geographic barriers and connect skilled professionals with outstanding global teams and products for full-time, long-term roles. We help you discover meaningful work with teams that invest in your success, where you’re empowered to grow personally and professionally.

Join one of the highest-rated workplaces on Glassdoor and experience what it means to thrive in a truly remote-first world.

About the Role
This is a long-term, strategic role, not a short sprint. You'll be embedded in a collaborative engineering and analytics team, working across the full data lifecycle: ingestion, transformation, modelling, and surfacing insights through Looker. You'll work closely with stakeholders across commercial, product, and marketing to ensure data is reliable, scalable, and meaningful.

You'll be given real ownership. This is a role for someone who wants to shape standards, improve the architecture, and grow with a brand that takes its data seriously.



Responsibilities
  • Design, build, and maintain robust ETL/ELT pipelines that move data from source systems into Google BigQuery, ensuring reliability, scalability, and observability at every stage.
  • Develop and enforce data models and schema standards using best-practice SQL and dimensional modelling principles, with a focus on clarity, reuse, and performance.
  • Own the Google BigQuery environment, optimising queries, managing costs, enforcing data governance, and ensuring the platform scales alongside the business.
  • Build and maintain Looker explores, LookML models, and dashboards that translate complex datasets into clear, actionable business intelligence for non-technical stakeholders.
  • Work across the full Google Cloud Platform stack, including Cloud Storage, Dataflow, Pub/Sub, Cloud Functions, and Composer, to architect end-to-end data solutions.
  • Partner with analytics, engineering, and commercial teams to understand data requirements and translate business problems into scalable technical solutions.
  • Champion data quality and testing frameworks, implementing monitoring and alerting so that issues are caught early and resolved quickly.
  • Contribute to documentation, coding standards, and architectural decision records so the team can move fast with confidence.
  • Mentor junior data team members and set the bar for engineering rigour across the data function.
  • Stay current with developments in the modern data stack and proactively recommend tooling or process improvements where appropriate.
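The data-quality and monitoring responsibilities above can be made concrete with a minimal row-validation step of the kind a pipeline might run before loading a warehouse; the schema and field names here are hypothetical, a sketch rather than the team's actual framework:

```python
from datetime import datetime

# Hypothetical schema for an incoming orders feed; fields are illustrative.
REQUIRED_FIELDS = {"order_id", "amount", "created_at"}

def validate_row(row: dict) -> list[str]:
    """Return a list of data-quality issues found in one incoming row."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS - row.keys()]
    if "amount" in row and not isinstance(row["amount"], (int, float)):
        issues.append("amount is not numeric")
    elif "amount" in row and row["amount"] < 0:
        issues.append("amount is negative")
    if "created_at" in row:
        try:
            datetime.fromisoformat(row["created_at"])
        except (TypeError, ValueError):
            issues.append("created_at is not ISO-8601")
    return issues

def split_valid(rows):
    """Route rows to a clean batch or a quarantine list for alerting."""
    clean, quarantined = [], []
    for row in rows:
        (clean if not validate_row(row) else quarantined).append(row)
    return clean, quarantined

rows = [
    {"order_id": 1, "amount": 9.99, "created_at": "2024-05-01T10:00:00"},
    {"order_id": 2, "amount": -5, "created_at": "2024-05-01T10:05:00"},
]
clean, bad = split_valid(rows)
print(len(clean), len(bad))  # 1 1
```

Quarantining rather than dropping bad rows is what makes the alerting mentioned above possible: the quarantine list can feed a monitoring dashboard or a dead-letter table.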


Requirements
  • 5+ years of experience in SQL and data modelling, with strong command of dimensional modelling, star schemas, and performance optimisation.
  • 3+ years working with Google BigQuery in a production environment.
  • 3+ years hands-on experience with Google Cloud Platform (Cloud Storage, Dataflow, Pub/Sub, Cloud Functions, Composer).
  • 3+ years building and maintaining ETL/ELT pipelines at scale.
  • 1+ year working with Looker and LookML to deliver business-facing dashboards and data products.
  • Demonstrable experience leading at least one data project end-to-end, from scoping through to delivery.
  • Able to communicate clearly with non-technical stakeholders about data limitations, timelines, and trade-offs.
  • Comfortable making pragmatic architecture decisions in a cloud-native, modern data stack environment.


Nice to Have
  • Experience with dbt (Data Build Tool) for transformation layer management and testing.
  • Familiarity with orchestration tools such as Apache Airflow or Cloud Composer.
  • Python skills for pipeline scripting, data validation, or automation.
  • Background in retail, ecommerce, or fashion, understanding how data flows across commercial and digital channels.
  • Exposure to real-time or streaming data pipelines using Pub/Sub or Dataflow.
  • Experience with Terraform or Infrastructure-as-Code practices in a GCP context.
  • Familiarity with data governance frameworks, cataloguing, and lineage tracking.


Benefits
  • Fixed Shifts: 12:00 PM - 9:30 PM IST (Summer) | 1:00 PM - 10:30 PM IST (Winter)
  • No Weekend Work: Real work-life balance, not just words
  • Day 1 Benefits: Laptop and full medical insurance provided
  • Support That Matters: Mentorship, community, and forums where ideas are shared
  • True Belonging: A long-term career where your contributions are valued



At Smart Working, you’ll never be just another remote hire.

Be a Smart Worker - valued, empowered, and part of a culture that celebrates integrity, excellence, and ambition.

If that sounds like your kind of place, we’d love to hear your story. 



Please mention the word **ENGAGING** and tag RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3 when applying to show you read the job post completely (#RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
$$$ Full time
Senior Data Engineer
  • Capnexus
  • Remote
amazon system software cloud
Capnexus is a comprehensive services provider. Our team consists of outstanding professionals, highly experienced in designing, building, and supporting retail software. We see ourselves as a build-as-a-service provider following a repeatable business pattern that can be applied to a variety of platforms and verticals. With a culture built on outcomes and delivery at its core, Capnexus provides its customers with a complete suite of services for software development, system analysis, integration, implementation, and support, as well as the option to engage a single team to perform all the services they require.

Who You Are and What You'll Do:

Capnexus is looking for a highly skilled Senior AWS Data Engineer to lead data architecture, pipeline development, and ERP integration for a 12-week AI-powered modernization engagement in the construction industry. The role focuses on designing and implementing the data engineering backbone of an intelligent subcontractor pre-qualification platform, including CMIC ERP API integration, Amazon Textract data extraction pipelines, ETL development using AWS Glue, and data quality validation. This is an exciting opportunity to apply advanced cloud data engineering skills on a platform that leverages generative AI to automate and modernize enterprise workflows.

Responsibilities:

Please mention the word **PICTURESQUE** and tag RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3 when applying to show you read the job post completely (#RMjYwMDo0MDQwOmFjZjg6YmYwMDo2MjQxOmRjZDQ6NzhiYjo3NDk3). This is a beta feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.
Gross salary $4500 - 7500 Full time
Python Artificial Intelligence Machine Learning Kubernetes
Niuro connects projects with elite tech teams, collaborating with leading U.S. companies. Our mission is to simplify global talent acquisition through innovative solutions that maximize efficiency and quality.

The Head of AI will join Niuro’s remote-first environment to define and drive the AI strategy across the organization, partnering with the CEO to align technology with business goals. You will lead the design and deployment of scalable, secure AI platforms, modernizing legacy systems while delivering transformative AI capabilities for our clients.

This role sits at the intersection of strategic leadership and hands-on technical execution, guiding cross-functional teams and ensuring that AI initiatives translate into measurable business outcomes. You will also help nurture a global, high-performance workforce through mentorship, training, and strong governance around AI programs.

This job is original from Get on Board.

Key Responsibilities

  • Vision & Strategy: Partner with the CEO to define and execute Niuro's AI roadmap, ensuring alignment with business objectives and market opportunities. Translate strategy into actionable programs with clear milestones and metrics.
  • Architecture Leadership: Serve as the chief AI architect, designing scalable, secure AI-driven systems. Lead the transition from legacy platforms to modern infrastructure while ensuring reliability and compliance.
  • Innovation & Delivery: Drive rapid development of new AI-powered features and services, balancing speed with maintainability and long-term support.
  • Technology Oversight: Guide the use of cloud-based technologies (AWS, Terraform, Kubernetes, Python, Windows Server/IIS, FastAPI). Implement monitoring (CloudWatch, Grafana) and data pipelines (AWS Glue/Lambda) to ensure scalability and observability.
  • People & Stakeholders: Communicate complex technical concepts clearly to executives, clients, and internal teams. Mentor senior engineers and foster a culture of scientific rigor and responsible AI.

Required Skills & Experience

8+ years in software engineering, data science, or AI with at least 3+ years in leadership. Proven track record deploying AI/ML solutions at scale in production environments. Strong systems design background and experience with cloud platforms (AWS preferred). Advanced Python programming skills; experience with modern AI frameworks and LLMs. Demonstrated success modernizing legacy platforms and delivering scalable, maintainable AI solutions. Exceptional executive-level communication abilities and a talent for translating technical concepts into business value. Fluent in English; Spanish or Portuguese is a plus.

Desirable Skills & Experience

Experience in regulated industries (fintech, govtech) and products with active users and customer support operations. Familiarity with AWS AI services, container orchestration (Kubernetes/ECS), and MLOps. Exposure to LLM-based automation and data engineering workflows. A proactive, entrepreneurial mindset with a bias for action and strong collaboration skills.

Benefits & Perks

We offer the chance to participate in impactful, technically rigorous industrial data projects that drive innovation and professional growth. Niuro supports a 100% remote work model, enabling global flexibility. We invest in career development through ongoing training and leadership opportunities, ensuring continuous growth. Upon successful completion of the initial contract, there is potential for long-term collaboration and stable, full-time employment. Joining Niuro means being part of a global community with strong administrative support that enables you to focus on impactful work.

Fully remote You can work from anywhere in the world.
$$$ Full time
Python NoSQL Machine Learning Cloud Computing

Grupo Mariposa is a multinational food and beverage corporation founded in 1885, with operations in more than 14 countries and over 15,000 employees. We have the largest beverage portfolio in the region and partnerships with global leaders such as PepsiCo and AB InBev. In recent years we have expanded globally and reorganized into four business units: apex (transformation), cbc (distribution), beliv (beverage innovation), and bia (foods). We are looking for talent to power our growth strategy and bring joy and development across the organization. In this role, you will have the opportunity to lead data and AI architecture, designing scalable solutions that enable large-scale analytics and the operationalization of ML/AI models in production environments.


Responsibilities

  • Design and implement hybrid, scalable Data & AI (Lakehouse) architectures to support massive ingestion and ML/AI workloads.
  • Define and standardize Data Science workflows, ensuring an agile, robust model lifecycle (MLOps).
  • Act as the technical reference for Databricks (Unity Catalog, Delta Lake, MLflow), ensuring code quality and performance best practices.
  • Model and optimize NoSQL schemas for high-performance, low-latency applications.
  • Establish data governance policies covering traditional datasets, feature stores, and model registries.
  • Work closely with Data Engineers and Data Scientists to close the gap between prototyping and production deployment.
  • Lead AI-oriented architecture initiatives, including batch vs. real-time inference patterns and model scalability.
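One way to picture the batch vs. real-time inference distinction is a single scoring function shared by both paths; the hand-rolled linear model below is a hypothetical illustration, not anything Databricks-specific:

```python
# Hypothetical linear model: score = w . x + b (weights are made up).
WEIGHTS, BIAS = [0.4, -0.2], 0.1

def predict_one(features):
    """Real-time path: score a single record at request time."""
    return sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS

def predict_batch(rows):
    """Batch path: score a whole dataset offline, e.g. on a schedule,
    then write the results to a table for downstream consumers."""
    return [predict_one(r) for r in rows]

print(round(predict_one([1.0, 2.0]), 3))  # 0.1
```

The architectural decision is less about the scoring math, which is identical, than about latency, cost, and freshness: batch amortizes compute over millions of rows, while real-time pays per-request overhead for up-to-the-second inputs.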

Description and Requirements

We are looking for a Senior Data Architect with the strategic vision to lead the design of our data platform.
Your mission will be to build foundations that enable not only data analytics at scale but also the efficient operationalization of data, Machine Learning, and AI solutions. You must have a strong command of Databricks and proven experience taking models to production and managing architectures that support advanced analytics. The ability to define data governance and MLOps strategies and to collaborate across teams will be valued.

Requirements:

  • More than 5 years of experience in Data Engineering, Data Architecture, or ML Engineering.
  • Expert command of Databricks (Unity Catalog, Delta Lake, MLflow).
  • Advanced PySpark and Python programming for data engineering.
  • Solid experience with database design and integrations (schema design, query optimization, and scalability).
  • Knowledge of cloud environments (Azure, AWS, or GCP).

Nice to have:

  • Experience with feature stores.
  • Knowledge of orchestration tools (Airflow, Prefect, etc.).
  • Experience with AI and ML architectures, including MLOps practices (training, versioning, deployment, and monitoring).
  • Familiarity with deploying LLMs or generative AI.
  • Databricks certifications (Data Engineer or ML Practitioner).

Desired profile

Databricks certifications and demonstrable experience leading AI projects in production environments are valued, as is the ability to communicate technical results to non-technical stakeholders and to lead multidisciplinary teams. We look for a results-oriented profile with analytical thinking and a practical approach to solving complex, large-scale data problems.

Benefits

  • Remote work
  • An excellent environment for proposing and driving technological innovation
  • A collaborative and dynamic work environment
  • Professional development and growth opportunities
  • Flexible schedules and work-life balance

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Computer provided Grupo Mariposa provides a computer for your work.
Informal dress code No dress code is enforced.
Gross salary $3800 - 4000 Full time
Full-stack Automation Prompt Engineering API Integration

Nine-67 is building a fast-moving AI capability for enterprise clients. This role sits at the intersection of product, data, and execution, directly partnering with the CEO to design, build, and deploy AI-driven applications in real client environments. You will contribute to shaping a scalable, high-quality AI platform by delivering end-to-end solutions that combine frontend, backend, and data workflows in rapid iterations.

As a key player in a fast-build environment, you’ll help transform ambiguous business problems into working systems, create internal tools and automation, and integrate with client systems and data sources to drive real business value.


What You’ll Do

• Build and deploy AI-driven applications end-to-end (frontend, backend, data workflows) with speed and quality.
• Translate business problems into functioning AI systems with minimal direction.
• Collaborate directly with leadership and clients to iterate on real use cases.
• Develop internal tools, agents, and automation to boost efficiency.
• Integrate with APIs, data sources, CRM systems, data warehouses, and client environments.
• Continuously improve speed, reliability, and reusability of what we build.

What We’re Looking For

• Strong builder mindset—ship fast and learn by doing.
• Experience with AI tools and frameworks (LLMs, APIs, prompt systems, agents).
• Comfort across the stack; you don’t need to be perfect, but you can figure it out.
• Ability to work in ambiguity without waiting for detailed specs.
• Strong problem-solving and product intuition.
• High ownership and accountability.

Nice to Have

• Experience with Cursor, Vercel, Supabase, or similar modern stacks.
• Experience building internal tools or client-facing applications.
• Exposure to data pipelines, analytics, or CRM systems.
• Prior startup or consulting experience.

Why This Role

• Direct collaboration with leadership on high-impact projects.
• Build real systems used by enterprise clients.
• Opportunity to shape and scale AI capability from the ground up.

Fully remote You can work from anywhere in the world.
$$$ Full time
Machine Learning Engineer
  • NeuralWorks
  • Santiago (Hybrid)
Python SQL Docker Machine Learning

NeuralWorks is a high-growth company founded 4 years ago. We are working at full speed on things that will get people talking.
We are a team that combines creativity, curiosity, and a passion for doing things well. We dare to explore frontiers others don't reach: a Monte Carlo-based predictive model, a convolutional network for face detection, a Bluetooth position sensor, the recreation of an acoustic space using finite impulse response.
These are just some of the challenges where we learn, explore, and complement one another as a team to achieve the unthinkable.
We work on our own projects and support corporations through partnerships where, side by side, we combine knowledge with creativity to imagine, design, and create digital products capable of captivating and making an impact.

👉 Learn more about us


Job description

The Data & Analytics team works on projects that combine huge data volumes with AI, such as detecting and predicting failures before they occur, optimizing pricing, personalizing the customer experience, optimizing fuel consumption, and detecting faces and objects with computer vision.

You will work on moving processes to MLOps and building tailor-made data products based on analytical models, mostly Machine Learning, though drawing on a broader spectrum of techniques where needed.

Within a multidisciplinary team of Data Scientists, Translators, DevOps engineers, and Data Architects, your role will be extremely important and key to the development and execution of our products, since you connect the enablement and operation of the environments with the real world. You will be responsible for increasing delivery speed, improving code quality and security, understanding the structure of the data, and optimizing processes for the development team.

On any project you work on, we expect a strong spirit of collaboration, a passion for innovation and code, and an automation-first mindset over manual processes.

As an MLE, your work will consist of:

  • Working directly with the Data Science team to put Machine Learning models into production, using and building ML pipelines.
  • Collecting large volumes of varied datasets.
  • Capturing real-world interactions for later retraining.
  • Building the pieces needed to serve our models and have them interact with the rest of the company in a real, highly scalable environment.
  • Working closely with the Data Scientists to find efficient ways to monitor and operate the models and make them explainable.
  • Promoting a technical culture by driving data products with DevSecOps, SRE, and MLOps practices.

Key qualifications

  • A degree in Computer Science Engineering or a similar field.
  • At least 3 years of hands-on experience in roles such as Software Engineer or ML Engineer.
  • Experience with Python.
  • An understanding of data structures, with the analytical skills to work with unstructured datasets and advanced SQL knowledge, including query optimization.
  • Experience with CI/CD pipelines and Docker.
  • A passion for data processing problems.
  • Experience with cloud providers (GCP, AWS, or Azure, preferably GCP), especially their data processing services.
  • A good command of English, above all reading: you should be able to regularly read papers, articles, and documentation.
  • Communication and collaboration skills.

Diversity matters at NeuralWorks! We firmly believe in creating an inclusive, diverse, and equitable work environment. We recognize and celebrate diversity in all its forms and are committed to offering equal opportunities to all candidates.

"Men apply for a job when they meet 60% of the qualifications, but women only if they meet 100% of them." Gaucher, D., Friesen, J., & Kay, A. C. (2011).

We encourage you to apply even if you don't meet all the requirements.

Nice to have

  • Agility in spotting potential improvements, problems, and solutions in architectures.
  • Experience with Infrastructure as Code, observability, and monitoring.
  • Experience building and optimizing data pipelines, message queues, and highly scalable big data architectures.
  • Experience with distributed processing using cloud services.
  • A stack oriented toward econometric models (statsmodels, pyfixest) and serialization.
  • Experience with a distributed data engine such as PySpark, Dask, or Modin.
  • Interest in bleeding-edge causal inference topics: observational techniques, design-based inference, probability and statistics (with a strong emphasis on OLS and its various extensions).

Benefits

  • MacBook Air M2 or similar (with a highly convenient purchase option)
  • Performance bonus
  • Monthly lunch allowance and team lunch on Fridays
  • Complementary health and dental insurance
  • Flexible hours
  • Flexibility between office and home office
  • Half a day off on your birthday
  • Funding for certifications
  • Coursera enrollment with a tailored training plan
  • Bicycle parking
  • Referral program
  • Monthly team-building outing

Library Access to a library of physical books.
Accessible An infrastructure adequate for people with special mobility needs.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks NeuralWorks offers space for internal talks or presentations during working hours.
Life insurance NeuralWorks pays or copays life insurance for employees.
Meals provided NeuralWorks provides free lunch and/or other kinds of meals.
Partially remote You can work from your home some days a week.
Bicycle parking You can park your bicycle for free inside the premises.
Digital library Access to digital books or subscriptions.
Computer repairs NeuralWorks covers some computer repair expenses.
Dental insurance NeuralWorks pays or copays dental insurance for employees.
Computer provided NeuralWorks provides a computer for your work.
Education stipend NeuralWorks covers some educational expenses related to the position.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Recreational areas Space for games or sports.
Shopping discounts NeuralWorks provides some discounts or deals in certain stores.
Vacation over legal NeuralWorks gives you paid vacations over the legal minimum.
Beverages and snacks NeuralWorks offers beverages and snacks for free consumption.
Vacation on birthday Your birthday counts as an extra day of vacation.
Time for side projects NeuralWorks allows employees to work in side-projects during work hours.
Gross salary $4500 - 7500 Full time
Python Machine Learning Deep Learning AWS SageMaker
We are VARTEQ Inc., a remote-first IT outsourcing and product company with teams across the US, Europe, and LatAm. We build scalable, data- and AI-driven solutions for clients in fintech, edtech, and enterprise software.

In this long-term, actively growing engagement, we support enterprise B2B clients across the US and Europe, including manufacturing, distribution, and high-tech, by building and integrating complex digital commerce solutions on platforms such as SAP, Salesforce, and Shopify. We are looking for a Machine Learning Engineer to help design, build, and optimize production-ready ML systems, with a specific focus on recommender systems for large-scale business impact.


Your Responsibilities:

We are a technology consultancy working with enterprise B2B clients across the US and Europe. In this role, we focus on delivering production-ready machine learning capabilities.
  • Design, build, and optimize machine learning models for production use, with a focus on recommender systems.
  • Develop and maintain scalable ML pipelines, including data processing, training, evaluation, and deployment.
  • Work with large datasets to extract insights and improve model performance.
  • Collaborate with cross-functional teams to integrate ML solutions into production systems.
  • Continuously improve model performance through experimentation, tuning, and monitoring.
  • Ensure reliability and scalability of ML systems in cloud environments.

Qualifications:

We are looking for an experienced Machine Learning Engineer with strong production ML and MLOps experience.
  • 5+ years of hands-on experience in machine learning engineering.
  • Strong proficiency in Python and core ML frameworks such as PyTorch, TensorFlow, scikit-learn, and XGBoost.
  • Solid experience with deep learning, including model architecture, training, and optimization.
  • Proven experience designing and deploying recommender systems.
  • Hands-on experience with AWS SageMaker and the broader AWS ML ecosystem.
  • Practical experience building and maintaining data pipelines and ML workflows.
  • Experience working with production ML systems and MLOps practices.
We also value teammates who are proactive in monitoring model health, rigorous about reliability and scalability, and comfortable collaborating across engineering and product stakeholders to ensure ML solutions work end-to-end in production.

Desirable:

  • Experience with experimentation frameworks, A/B testing, and offline-to-online evaluation for recommender systems.
  • Familiarity with model monitoring approaches (e.g., drift detection, performance tracking) and incident response for ML in production.
  • Good understanding of data versioning and reproducible training workflows.
  • Experience integrating ML outputs into business-facing systems and improving user-facing recommendations over time.

What We Offer

  • 100% remote, async-friendly culture
  • Flexible working hours
  • Competitive compensation (contract-based)
  • Direct collaboration with US-based clients
  • English-speaking environment
  • Paid time off + public holidays
We support an international team with clear, well-documented processes.

Fully remote You can work from anywhere in the world.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Computer provided VARTEQ Inc. provides a computer for your work.
Gross salary $4000 - 6000 Full time
Python C# Docker CI/CD

Vequity is building the world’s most robust, contextualized buyer intelligence network for investment banks, private equity firms, and strategic acquirers — a platform with over 2.1 million buyer profiles, each containing ~100 structured and inferred data fields. Our proprietary AI agents continuously enrich, infer, and structure buyer intelligence at scale.

We need a fullstack engineer who ships product features end-to-end, brings real fluency with AI development tooling, and will take ownership of deployment pipelines that currently lack a dedicated owner.

This is a two-sided role: half building features that users see, half making the engineering team faster and more reliable. If you’ve actually built with Claude Code, Cursor, GitHub Copilot, or similar tools — not just experimented — and you can prove it with real output, we want to talk.


What you’ll own

  • Fullstack product development. Build and ship features across the Angular frontend and C# / Python backend. Translate product requirements into production-ready code. Write clean, tested, maintainable code with solid PR practices.
  • AI-augmented development. Actively use AI coding tools (Claude Code, Cursor, GitHub Copilot, Windsurf, Aider) to accelerate your own development velocity. Improve team patterns and best practices for AI-assisted workflows. Evaluate and integrate new AI development tools as they emerge.
  • Deployment and operations. Own and improve CI/CD pipelines, deployment automation, and infrastructure-as-code. Build monitoring, alerting, and incident response capabilities. Manage cloud infrastructure (GCP) including cost optimization and scaling. Create and maintain runbooks for operational procedures.
  • Developer experience and tooling. Reduce friction in the development-to-deployment cycle. Improve local development environments, testing infrastructure, and developer workflows. Standardize build, lint, and test tooling across the codebase.
  • Cross-functional collaboration. Work across product, engineering, and sales operations teams. Bridge feature development and infrastructure reliability. Participate in code reviews and mentor team members.

What success looks like in year one

  • Shipping 2–3 features per sprint while maintaining code quality.
  • Continuous deployment implemented within your first month.
  • At least one improvement to the team’s AI-assisted workflow per week.
  • Deployment pipeline has a clear owner with documented runbooks and <15 minute rollback capability within 3 months.
  • Zero unplanned downtime from deployment issues within 6 months.
  • Your teammates are measurably faster because of the tooling and patterns you’ve introduced.

What we’re looking for

Core requirements

  • 4+ years fullstack development experience with Angular + Python or C# backends.
  • Demonstrated production use of AI coding tools (Claude Code, Cursor, GitHub Copilot) — must be able to show concrete examples of how these tools changed your workflow and output.
  • Experience with CI/CD pipelines, containerization (Docker), and cloud deployment (GCP preferred, AWS acceptable).
  • Solid understanding of DevOps practices: infrastructure-as-code, monitoring, logging, alerting.
  • Strong written English — this is a remote, async-heavy role with a US-based team.
  • Comfort working in a fast-paced startup where priorities shift and ownership is expected.

Nice to have

  • GitHub Actions experience is a big plus.
  • Experience owning deployment pipelines end-to-end in a startup environment.
  • Terraform or Pulumi for infrastructure-as-code.
  • Kubernetes or Cloud Run experience on GCP.
  • Background in B2B SaaS or data-intensive platforms.
  • Experience with PostgreSQL and data-heavy applications.
  • Familiarity with the Python data tooling ecosystem (even if not a data engineer).
  • Contributions to open source or public examples of AI-augmented development work.

Compensation and benefits

We pay competitively for the LATAM market and we’re transparent about it.

  • Time off: Manage your own schedule. We trust you.
  • Health: $150/month health and wellness stipend.
  • Engagement: B2B contract. 30-day mutual notice.

How we work

  • Fully remote. We are based in Denver, Colorado (MT, UTC-7). You can work from Mexico, Colombia, Argentina, Brazil, Chile, or anywhere in the Americas with strong overlap.
  • Same time zone. We expect significant daily overlap with Mountain Time (MT). LATAM time zones are ideal, which is a key reason we're hiring in the region.
  • Async-first. We write things down. Docs, Loom videos, and thoughtful PR descriptions are the norm. Meetings happen when they’re the fastest path to clarity, not by default.
  • Small team, direct access. You will work directly with the Head of Engineering and the founder. No middle management. Your work ships fast.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Health coverage Vequity pays or copays health insurance for employees.
Computer provided Vequity provides a computer for your work.
$$$ Full time
Customer Success Project Management API Integration Account Management

OMNIX develops a PaaS platform for automating and orchestrating disruptions in complex operations, integrating with core systems such as ERP, WMS, CRM, and IoT. We work with enterprise companies in industries such as telecommunications, retail, logistics, and manufacturing, where operational continuity is critical.
The Customer Success Manager joins the Delivery & Customer Success team, working closely with Forward Deployed Engineers (FDE), Sales, and Product. The role ensures that implementations generate real, sustained impact on the client's business: turning projects into deep adoption, expanded usage, and tangible operational value, contributing directly to the retention and growth of strategic accounts.


Job functions

The Customer Success Manager is responsible for end-to-end management of enterprise accounts post-implementation, ensuring that OMNIX becomes a mission-critical system within the client's operation. They lead the strategic relationship with stakeholders, define priority use cases together with the client, and build an expansion roadmap based on operational impact.
They work in close coordination with the FDE, who executes the solutions technically, while the CSM ensures their adoption, continuity, and value in production. The role has autonomy to prioritize initiatives, detect expansion opportunities, and escalate decisions. It leads executive forums such as QBRs and is responsible for sustaining a clear narrative of value. Success in the role is measured by depth of platform usage, account expansion, and the ability to turn solutions into concrete results within the client's operation.

Qualifications and requirements

At least 5 years of experience in Customer Success, consulting, or account management roles in enterprise B2B contexts.

Demonstrable experience working with complex clients in industries such as logistics, telecommunications, retail, or manufacturing.

Ability to engage technical and executive (C-level) stakeholders, holding both business and technology conversations.

Experience managing implementations or projects with multiple integrations (ERP, APIs, core systems).

Strong results orientation, with the ability to structure problems, prioritize initiatives, and execute autonomously.

Advanced English (spoken and written) for interaction with international teams and clients.

High operational discipline, consistent follow-through, and accountability in demanding environments.

Desirable skills

Previous experience at SaaS/PaaS companies or data and operational-automation platforms.

Knowledge of integration tools, data workflows, or automation (e.g., n8n, Zapier, APIs, ETL).

Experience in strategic consulting or digital-transformation implementations at large enterprises.

Familiarity with management methodologies such as EOS or disciplined-execution frameworks.

Knowledge of data analytics, anomaly detection, or AI models applied to operations.

Experience in high-growth environments or technology companies with an enterprise focus.

Conditions

Benefits of working at OMNIX
  • Be part of an agile, high-impact team where everyone contributes and makes a difference.
  • Mostly remote work, with flexibility and objective-based management.
  • Performance and company results bonuses.
  • Fast professional growth, with the possibility to expand roles and responsibilities.
  • We operate using the EOS (Entrepreneurial Operating System), which provides:
    • clarity of goals,
    • strong prioritization,
    • clear metrics,
    • a culture of accountability.
  • Opportunity to work with teams in Chile, Peru, Colombia, and the United States.
  • Participation in cutting-edge AI and automation projects with real impact on enterprises and governments.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Health coverage OMNIX AI Corp pays or copays health insurance for employees.
Informal dress code No dress code is enforced.
Vacation over legal OMNIX AI Corp gives you paid vacations over the legal minimum.
Gross salary $2200 - 2400 Full time
JavaScript Android iOS Git
We are 3IT: innovation and talent that make a difference!
For us, innovation is a collaborative process and growth is a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know that good results start with good relationships.
We also value diversity and promote inclusive workplaces. That is why we actively support compliance with Chile's Law 21.015, ensuring accessible processes and equal opportunities.
If you are looking for a place to keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.


📝 What would your job be?

Develop efficient user interfaces, ensuring the functionality and visual quality of the software in compliance with project requirements.

🌟 Tools

  • React Native
  • JavaScript
  • TypeScript
  • REST services
  • Agile framework: Scrum
  • Code versioning (Git)
  • Native iOS or Android development
  • Atlassian Suite tools: Jira, TM4J, Bamboo
  • Banking industry experience
  • At least 4 years of experience working with the technologies listed above
📍 Where and how will you work?
  • Office location: Santiago
  • Modality: Hybrid

✋ A few things to consider before applying

  • You must be available to work in a hybrid arrangement and to attend the client's offices in person.
  • If you have a disability, let us know if you need any accommodation for your interview.

✌️ 3IT benefits

💰 Annual bonus
🦷 Dental insurance
📚 Training
📅 Administrative days
🍽️ Pluxee card + $80.000
👕 Informal dress code
🚀 Upskilling and reskilling programs
🏥 MetLife supplementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Holiday bonus for Fiestas Patrias and Christmas
👶 Extra paternity leave days
🎂 Half day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discounts
🎁 New-baby gift
🛍️ Buk discounts

Health coverage Banchile pays or copays health insurance for employees.
Computer provided Banchile provides a computer for your work.
Gross salary $8000 - 10000 Full time
Figma Jira Notion A/B Testing

About EVEN

EVEN is the leading direct-to-fan platform for artists and labels. We help artists sell music, merchandise, and exclusive content directly to their superfans, with every sale counting toward official chart reporting through Luminate.

Our platform powers pre-orders, digital storefronts, and direct-to-consumer commerce for artists including J. Cole, French Montana, Brent Faiyaz, LaRussell, and Mick Jenkins. We are partnered with Universal Music Group, UnitedMasters, Too Lost, Stem, Symphonic, Secretly Distribution, Virgin Music Group, and others across 3,000+ labels and distributors in over 110 countries.

We are a remote-first team of 35 people across the US and Latin America. Our engineering team of 16 is primarily based in LATAM and operates in three squads (Artist, Fan, Core), shipping across web, mobile, and API. You will be working alongside engineers you can communicate with natively.


Why This Role Exists

Product direction at EVEN is currently shared between our CEO (vision, strategy, partner commitments) and our CTO (day-to-day product and engineering decisions). Our Lead Product Designer shapes UX and design. There is no dedicated product manager.

We are now 35 people with three engineering squads, partnerships with the leading music companies, and a product surface that spans artist dashboards, fan storefronts, mobile apps, e-commerce, streaming, chart reporting, and API integrations.

We need someone whose full-time job is to own the product roadmap, run shaping sessions, write clear briefs, coordinate cross-team priorities, and connect what our partners and artists need with what our engineering team builds.

What you will do:

  • Own the product roadmap end to end. Translate company strategy into quarterly priorities, and quarterly priorities into engineering-ready specs.
  • Run shaping sessions with the CTO and engineering leads. Turn raw ideas into scoped briefs with clear acceptance criteria before they hit a sprint.
  • Manage the product process: Ideas Pool to PRD Library to Roadmap (we use Notion, Linear, and Figma).
  • Work directly with our 3 product designers to define user flows, review designs, and ship features that match the brief.
  • Coordinate across squads (Artist, Fan, Core) to manage dependencies, unblock engineers, and keep the roadmap on track.
  • Partner with BD and Artist Relations to understand what artists, labels, and distributors need and translate that into product requirements.
  • Define product metrics, track them in PostHog, and use data to prioritize what ships next.
  • Report to the CEO. Work side by side with the CTO.

Success at 30/60/90 days:

  • 30 days: You have audited the current roadmap, met every team lead, and identified the top 3 product gaps.
  • 60 days: You own the shaping process. Every feature entering a sprint has a brief you wrote or approved.
  • 90 days: The CEO is no longer involved in day-to-day product decisions. The roadmap is yours.

Qualifications and requirements

  • 5+ years in product management at a B2C or marketplace company, with at least 2 years as a lead or senior PM.
  • You have shipped and scaled digital commerce, content, or creator-economy products. Experience with platforms that have both a supply side (artists, creators) and a demand side (fans, consumers) is strongly preferred.
  • You write clear PRDs and briefs. You can take a vague idea and turn it into a scoped spec with acceptance criteria that engineers can build from.
  • You have run or closely participated in product shaping sessions with engineering and design teams.
  • You have managed or closely collaborated with product designers. You can give useful design feedback and know the difference between UX and UI polish.
  • You are fluent in English and Spanish, written and verbal. Our engineering and design teams work primarily in Spanish. Our commercial team works in English. You need both.
  • You are comfortable working across US and LATAM time zones with a fully distributed team.
  • You have used tools like Linear, Notion, Figma, and PostHog (or equivalents like Jira, Confluence, Amplitude, Mixpanel).
  • You understand analytics-driven product development. You can define metrics, set up tracking, and use data to make prioritization calls.
  • You have worked at a startup (Series A to Series C) where process was still being built and you had to build it yourself.

Desirable skills

  • Experience with direct-to-consumer e-commerce platforms or digital storefronts.
  • Background in the music industry, artist services, or label partnerships.
  • Familiarity with Luminate/SoundScan chart reporting or music distribution workflows.
  • Experience working with React, Next.js, or modern web/mobile stacks. You will not code, but technical fluency helps you scope better and earn engineering trust faster.
  • Prior experience at a Series A or Series B startup where you built the product function from scratch (first PM hire).
  • Experience managing a product team of 3+ people (designers and/or PMs).
  • Experience with mobile product development (iOS/Android).

Conditions

  • Fully remote. Work from anywhere in the Americas.
  • Equity Package
  • Core overlap hours: 10am to 3pm EST (New York time). The rest of your day is flexible.
  • Paid in USD via Deel.
  • Health stipend included in monthly compensation.
  • Flexible vacation and PTO policy.
  • Paid sick days.
  • Equipment provided.
  • Direct access to the CEO and CTO. No layers between you and the people making decisions.
  • You will be the first dedicated product hire. You are building the function, not joining one.

Relocation offered If you are moving in from another country, EVEN helps you with your relocation.
Fully remote You can work from anywhere in the world.
Pet-friendly Pets are welcome at the premises.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Health coverage EVEN pays or copays health insurance for employees.
Computer provided EVEN provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal EVEN gives you paid vacations over the legal minimum.
Gross salary $2000 - 2200 Full time
IT Product Owner
  • 3IT
  • Santiago (Hybrid)
Agile Scrum Jira OKR

We are 3IT: innovation and talent that make a difference!

For us, innovation is a collaborative process and growth is a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know that good results start with good relationships.

We also value diversity and promote inclusive workplaces. That is why we actively support compliance with Chile's Law 21.015, ensuring accessible processes and equal opportunities.

If you are looking for a place to keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.


📝 What would your job be?

Maximize product value by defining the vision, prioritizing the backlog, and ensuring the team delivers solutions aligned with business needs, users, and strategic objectives.

🎯 What do we need for you to join our team?

  • Proficiency with Jira
  • Hands-on experience with agile frameworks and Scrum
  • Product stakeholder management
  • Experience in the financial or banking sector
  • Knowledge of product OKRs and KPIs
  • 2 to 3 years of experience on agile teams
  • Functional command of development, UX, and technology
  • Writing user stories following the INVEST and 3C criteria
  • Ability to validate increments each sprint and maximize value
  • Skill in walking through user stories and incorporating feedback
  • Ownership of the product backlog: refinement, prioritization, and User Story Mapping

📍 Where and how will you work?

  • Office location: Santiago
  • Modality: Hybrid

✋ A few things to consider before applying:

  • You must be available to work in a hybrid arrangement and to attend our office in person
  • If you have a disability, let us know if you need any accommodation for your interview

Benefits you'll get if you join our team:

💰 Annual bonus
🦷 Dental insurance
📚 Training
📅 Administrative days
🍽️ Sodexo card + $80.000
👕 Informal dress code
🚀 Upskilling and reskilling programs
🏥 MetLife supplementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Holiday bonus for Fiestas Patrias and Christmas
👶 Extra paternity leave days
🎂 Half day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discounts
🎁 New-baby gift
🛍️ Buk discounts

Wellness program Banco de Chile offers or subsidies mental and/or physical health activities.
Life insurance Banco de Chile pays or copays life insurance for employees.
Digital library Access to digital books or subscriptions.
Health coverage Banco de Chile pays or copays health insurance for employees.
Dental insurance Banco de Chile pays or copays dental insurance for employees.
Computer provided Banco de Chile provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Beverages and snacks Banco de Chile offers beverages and snacks for free consumption.
Parental leave over legal Banco de Chile offers paid parental leave over the legal minimum.
Gross salary $2200 - 2800 Full time
Agile Project Management Budgeting Financial Services

We are 3IT: innovation and talent that make a difference!

For us, innovation is a collaborative process and growth is a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know that good results start with good relationships.

We also value diversity and promote inclusive workplaces. That is why we actively support compliance with Chile's Law 21.015, ensuring accessible processes and equal opportunities.

If you are looking for a place to keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.


📝 What would your job be?

Define, standardize, establish, and execute the strategic planning and operational processes of Development projects. The role also monitors activities and assigns tasks, resources, and budget across projects.

🎯 What do we need for you to join our team?

  • Results orientation
  • Use of PMI and Agile methodologies
  • Command of regulatory and compliance projects
  • Ability to report progress to senior management
  • Senior-level experience managing development projects
  • Track record in banking, financial services, or similar industries
  • Cross-functional coordination skills with multiple stakeholders
  • Implementation and maintenance of management frameworks such as CMMI
  • Competence in strategic planning, resource allocation, and budgeting

📍 Where and how will you work?

  • Office location: Santiago
  • Modality: Hybrid

✋ A few things to consider before applying:

  • You must be available to work in a hybrid arrangement and to attend our office in person
  • If you have a disability, let us know if you need any accommodation for your interview

Benefits you'll get if you join our team:

💰 Annual bonus
🦷 Dental insurance
📚 Training
📅 Administrative days
🍽️ Pluxee card + $80.000
👕 Informal dress code
🚀 Upskilling and reskilling programs
🏥 MetLife supplementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Holiday bonus for Fiestas Patrias and Christmas
👶 Extra paternity leave days
🎂 Half day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discounts
🎁 New-baby gift
🛍️ Buk discounts

Wellness program Banco de Chile offers or subsidies mental and/or physical health activities.
Life insurance Banco de Chile pays or copays life insurance for employees.
Digital library Access to digital books or subscriptions.
Health coverage Banco de Chile pays or copays health insurance for employees.
Dental insurance Banco de Chile pays or copays dental insurance for employees.
Computer provided Banco de Chile provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Beverages and snacks Banco de Chile offers beverages and snacks for free consumption.
Parental leave over legal Banco de Chile offers paid parental leave over the legal minimum.
Gross salary $2100 - 2300 Full time
Business Development Sales Forecasting Contract Management Negotiation

We are 3IT: innovation and talent that make a difference!

For us, innovation is a collaborative process and growth is a shared goal. We are guided by values such as teamwork, reliability, empathy, commitment, honesty, and quality, because we know that good results start with good relationships.

We also value diversity and promote inclusive workplaces. That is why we actively support compliance with Chile's Law 21.015, ensuring accessible processes and equal opportunities.

If you are looking for a place to keep learning, contribute what you know, and grow in a close-knit, collaborative environment, this could be your next opportunity.


📝 What would your job be?

Drive the company's strategic growth by generating new business opportunities and opening new markets, meeting a sales target shared between the IT Outsourcing and IT Solutions lines, and securing sustainable, profitable revenue for the organization.

🎯 What do we need for you to join our team?

  • Pipeline and forecast management, with use of HubSpot CRM
  • At least 6 years of experience in strategic sales roles
  • Experience in business development (hunting and opening new markets)
  • Negotiation and contract-closing skills with a focus on profitability
  • Ability to coordinate with internal areas (IT, PMO, Sales)
  • Ability to build commercial proposals (Scope of Work)
  • Expertise in consultative selling of technology services, outsourcing, and IT solutions

📍 Where and how will you work?

  • Office location: Providencia
  • Modality: Hybrid

✋ A few things to consider before applying:

  • You must be available to work in a hybrid arrangement and to attend our office in person
  • If you have a disability, let us know if you need any accommodation for your interview

Benefits you'll get if you join our team:

💰 Annual bonus
🦷 Dental insurance
📚 Training
📅 Administrative days
🍽️ Pluxee card + $80.000
👕 Informal dress code
🚀 Upskilling and reskilling programs
🏥 MetLife supplementary health insurance
💊 Discounts at pharmacies and health centers
🐾 Discounts on pet insurance and pet stores
🎄 Holiday bonus for Fiestas Patrias and Christmas
👶 Extra paternity leave days
🎂 Half day off on your birthday
🏦 Caja de Compensación Los Andes
🌍 Mundo ACHS discounts
🎁 New-baby gift
🛍️ Buk discounts

Wellness program 3IT offers or subsidies mental and/or physical health activities.
Life insurance 3IT pays or copays life insurance for employees.
Digital library Access to digital books or subscriptions.
Health coverage 3IT pays or copays health insurance for employees.
Dental insurance 3IT pays or copays dental insurance for employees.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Beverages and snacks 3IT offers beverages and snacks for free consumption.
Parental leave over legal 3IT offers paid parental leave over the legal minimum.
Gross salary $3100 - 4500 Full time
Data Engineer
  • Haystack News
  • Lima (Hybrid)
Python SQL Big Data Data Warehouse

Haystack News is the leading local and world news service on Connected TVs, reaching millions of users! This is a unique opportunity to work at Haystack News, one of the fastest-growing TV startups in the world. We are already preloaded on 37% of all TVs shipped in the US!

Be part of a Silicon Valley startup and work directly with the founding team. Jumpstart your career by working with Stanford & Carnegie Mellon alumni and faculty who have already been part of other successful startups in Silicon Valley.

You should join us if you're hungry to learn how Silicon Valley startups thrive, you like to ship quickly and often, love to solve challenging problems, and like working in small teams.

See Haystack's feature at this year's Google I/O.


Job functions

  • Analyze large data sets to get insights using statistical analysis tools and techniques
  • Collaborate with the Marketing, Editorial and Engineering teams on dataset building, querying and dashboard implementations
  • Support the data tooling improvement efforts and help increase the company data literacy
  • Work with the ML team on feature engineering and A/B testing for model building and improvement
  • Design, test and build highly scalable data management and monitoring systems
  • Build high-performance algorithms, prototypes and predictive models

Qualifications and requirements

  • Strong written and spoken English is a must!
  • Bachelor's degree in Computer Science, Statistics, Math, Economics or related field
  • 2+ years of experience doing analytics in a professional setting
  • Advanced SQL skills, including performance troubleshooting
  • Experience with data warehouses (e.g. Snowflake, BigQuery, Redshift)
  • Proficient in Python including familiarity with Jupyter notebooks
  • Strong Math/Stats background with statistical analysis experience on big data sets
  • Strong communication skills; able to communicate complex concepts effectively.

Conditions

  • Unlimited vacation :)
  • Travel to team's offsite events
  • 100% paid Uber rides to go to the office
  • Learn about multiple technologies

Accessible An infrastructure adequate for people with special mobility needs.
Relocation offered If you are moving in from another country, Haystack News helps you with your relocation.
Pet-friendly Pets are welcome at the premises.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Meals provided Haystack News provides free lunch and/or other kinds of meals.
Paid sick days Sick leave is compensated (limits might apply).
Partially remote You can work from your home some days a week.
Bicycle parking You can park your bicycle for free inside the premises.
Company retreats Team-building activities outside the premises.
Computer repairs Haystack News covers some computer repair expenses.
Commuting stipend Haystack News offers a stipend to cover some commuting costs.
Computer provided Haystack News provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Recreational areas Space for games or sports.
Gross salary $2500 - 3000 Full time
Data Engineer (Bilingual)
  • Adecco
  • Santiago (Hybrid)
Java Python SQL Docker

We are Adecco Chile, the local subsidiary of the world leader in Human Resources services, with more than 35 years of presence in the country and a solid track record supporting companies in their talent management. Adecco Chile is committed to offering comprehensive, personalized solutions, standing out in areas such as Recruitment, Staffing, Payroll Services and Training & Consulting. Our team works to high quality standards, backed by ISO 9001:2015 certification, with a presence in the country's main cities. We are currently looking to hire a Data Engineer for a strategic client project involving the construction and optimization of cloud data pipelines, with a particular focus on Google Cloud Platform technologies and modern processing and orchestration architectures.

Main Responsibilities and Duties

In this position, the Data Engineer's main objective will be to design, implement and maintain robust, scalable data pipelines to support business intelligence and advanced analytics needs. They will work closely with Data Science, BI and development teams to ensure data flows are optimized and available to their different consumers.
  • Design and develop cloud data ingestion, processing and distribution pipelines using Google Cloud Platform technologies and open source frameworks.
  • Manage development environments to ensure reproducibility and scalability with tools such as venv, pip and poetry.
  • Implement workflow orchestrators such as Cloud Composer (Airflow) and AI pipeline platforms to automate data engineering processes.
  • Optimize the performance of data clusters and pipelines, both batch and streaming, applying advanced knowledge of Apache Spark, Apache Beam or Apache Flink.
  • Apply feature engineering techniques and advanced data management to maximize analytical value.
  • Administer storage and databases on GCP, such as CloudSQL, BigQuery, Cloud Bigtable, Cloud Spanner and vector databases.
  • Coordinate the integration of microservices and real-time messaging via Pub/Sub, Kafka and Kubernetes Engine.
  • Ensure CI/CD processes for data pipelines are properly implemented with tools such as GitHub, Jenkins, GitLab and Terraform.
  • Participate in the design and scaling of distributed architectures, guaranteeing resilience and optimized use of cloud resources.

Requirements and Skills

We are looking for professionals with solid knowledge and proven experience in data engineering, able to work in dynamic, multidisciplinary environments. Advanced programming skills, hands-on cloud experience and a broad understanding of modern data architectures are essential.
  • Proficiency in English, both written and spoken, for effective communication within teams and for technical documentation.
  • Advanced experience with Python and Java, applied to the development and maintenance of data pipelines.
  • Hands-on experience in cloud environments, preferably Google Cloud Platform (GCP), using services such as CloudSQL, BigQuery, Cloud Storage, Pub/Sub, Cloud Functions and Kubernetes Engine.
  • Deep knowledge of Docker containers and virtual environment management with tools such as venv, pip and poetry.
  • Extensive experience orchestrating workflows with Airflow, Vertex AI Pipelines or equivalent orchestrators.
  • Competence in data engineering techniques, feature engineering, and distributed batch and streaming processing frameworks such as Apache Spark, Apache Beam or Apache Flink.
  • Advanced command of SQL and streaming concepts (windowing, triggers, late arrival) for structuring and manipulating data in real time.
  • Experience with continuous integration and deployment (CI/CD) using tools such as GitHub, Jenkins and GitLab, plus knowledge of infrastructure as code with Terraform.
  • Ability to design distributed, optimized data architectures, understanding the criteria for selecting storage and compute options.
  • Analytical skills and a business mindset for interpreting how data is used in Business Intelligence and advanced analytics.
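
The streaming concepts called out in the requirements (windowing, triggers, late arrival) can be illustrated without any framework. The sketch below assigns timestamped events to fixed one-minute windows and drops events that arrive too far behind the watermark, loosely mirroring what Beam or Flink do internally; the event shapes and thresholds are invented for illustration:

```python
from collections import defaultdict

WINDOW_SIZE = 60       # fixed windows of 60 s, keyed by their start timestamp
ALLOWED_LATENESS = 30  # accept events up to 30 s behind the watermark

def window_start(event_time):
    """Start of the fixed window an event's timestamp falls into."""
    return (event_time // WINDOW_SIZE) * WINDOW_SIZE

def aggregate(events):
    """Count events per window; drop those later than the watermark allows.

    `events` is an iterable of (event_time, arrival_time) pairs. The
    watermark is simulated as the max arrival time seen so far.
    """
    counts = defaultdict(int)
    dropped = []
    watermark = 0
    for event_time, arrival_time in events:
        watermark = max(watermark, arrival_time)
        if event_time < watermark - ALLOWED_LATENESS:
            dropped.append((event_time, arrival_time))  # too late: discard
        else:
            counts[window_start(event_time)] += 1       # on time or tolerably late
    return dict(counts), dropped

events = [(5, 5), (62, 63), (58, 70), (10, 130)]  # last event arrives 120 s late
counts, dropped = aggregate(events)
```

In a real pipeline the trigger policy would also decide *when* window results are emitted (e.g. on watermark passage, then again for each late firing); here only the lateness cutoff is modeled.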

Nice-to-Have Skills

  • Hands-on experience with distributed, scalable, resilient systems.
  • Work experience designing and architecting end-to-end data solutions involving transactions and multiple API-based sources.
  • Good understanding of performance-optimization strategies for data clusters and pipelines.
  • Exposure to GCP technologies for end-to-end data pipelines.
  • Experience with Kubernetes for large-scale container orchestration and administration.
  • Experience with vector databases, particularly Qdrant, for advanced search and analytics use cases.

What We Offer

- A challenging, dynamic work environment that fosters your professional development.
- The opportunity to join a highly qualified, professional team at our client.
- Ongoing training to keep you up to date with the latest technologies.
- Clear growth opportunities within the company and the tech sector.
- Initially a fixed-term contract, with the possibility of converting to a permanent contract with the end client.
- Hybrid work model: 1 day on-site at the office and 4 days remote.

Gross salary $4850 - 7000 Full time
ETL SQL Server Database Migration Performance Tuning

Krunchbox is transforming retail analytics with our next-generation platform (Krunchbox 2.0). We are migrating from 800 hardcoded ETLs to a modern, real-time analytics architecture powered by ClickHouse. This greenfield initiative aims to architect the analytical backbone for 100+ enterprise clients while maintaining and optimizing our existing SQL Server infrastructure during the transition. The Senior Database Engineer/Architect will lead the database transformation and operations across both legacy and modern systems, owning the analytical data layer, and delivering a scalable, multi-tenant ClickHouse architecture alongside ongoing SQL Server maintenance.

Key Responsibilities

  • ClickHouse Implementation (New Architecture): take over from initial consulting work to build production ClickHouse system; design and implement multi-tenant architecture with partition isolation; migrate from SQL Server to ClickHouse optimizing for columnar storage; create materialized views for KPI calculations and dashboards; implement data retention policies and TTL strategies; optimize query performance for complex analytical workloads; design backup, recovery, and high-availability strategies.
  • SQL Server Management (Legacy Systems): maintain and optimize existing SQL Server infrastructure during transition; lead ETL modernization from 800 hardcoded processes to automated pipelines; perform database performance tuning and troubleshooting; manage legacy architecture decisions and technical debt; ensure data integrity and availability during migration; optimize existing queries and stored procedures.
  • Leadership & Architecture: provide database leadership across legacy and modern platforms; mentor on SQL and ClickHouse best practices; design migration strategies minimizing downtime and risk; collaborate with development teams on data architecture decisions; create documentation and best practices for a hybrid database environment; lead knowledge transfer and retention efforts during transition.
  • Collaboration & Delivery: work closely with existing team members to ensure continuity and minimize disruption; partner with Event Hubs team for real-time data ingestion; contribute to roadmap and architecture decisions that support scalable analytics for 100M+ rows daily across tenants.

What you’ll bring

Required Qualifications

  • ClickHouse Expertise: 3+ years of production ClickHouse experience; strong background with large-scale migrations from relational to columnar databases; deep understanding of ClickHouse internals (MergeTree engines, partitioning, sharding); expertise in materialized views, projections, and aggregation optimizations; experience with ClickHouse Cloud or managing ClickHouse clusters.
  • SQL Server & Traditional DB Skills: 5+ years of SQL Server production experience in enterprise environments; expert-level SQL programming and query optimization; experience with ETL pipeline design and optimization; database administration including backup, recovery, and performance tuning; experience leading migrations and modernization projects.
  • General Qualifications: proficiency in SQL and ClickHouse-specific extensions; experience with multi-tenant data architectures; strong problem-solving capabilities for complex data challenges; excellent cross-team communication.
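
The materialized-view pattern central to the ClickHouse work above — pre-aggregating KPIs as rows arrive instead of scanning raw data at query time — can be sketched in plain Python. This is only a conceptual model of incremental aggregation, not ClickHouse itself; the tenant keys and the KPI are invented for illustration:

```python
from collections import defaultdict

class KpiView:
    """Toy incremental aggregate, analogous to a materialized view backed by
    a SummingMergeTree: running sales totals keyed by (tenant, day)."""

    def __init__(self):
        self.totals = defaultdict(float)

    def insert(self, tenant_id, day, amount):
        # Each insert updates the aggregate immediately, so reads never
        # have to scan the raw rows.
        self.totals[(tenant_id, day)] += amount

    def daily_total(self, tenant_id, day):
        return self.totals[(tenant_id, day)]

view = KpiView()
for tenant, day, amount in [("acme", "2024-06-01", 120.0),
                            ("acme", "2024-06-01", 80.0),
                            ("globex", "2024-06-01", 50.0)]:
    view.insert(tenant, day, amount)

print(view.daily_total("acme", "2024-06-01"))   # 200.0
```

The (tenant, day) key also hints at the multi-tenant partition-isolation requirement: each tenant's aggregates live under their own key, so one tenant's reads never touch another's data.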

Preferred Qualifications

  • Experience with Azure Event Hubs or Kafka integration; knowledge of CDC patterns and real-time data synchronization; retail or e-commerce analytics domain experience; experience with time-series data and IoT workloads; contributions to ClickHouse community or open source; experience with Azure SQL Database and cloud migrations; familiarity with Python/FastAPI for database integration; knowledge of data warehousing and OLAP concepts.

Desirable but not required

  • Hands-on experience in modern data platforms and cloud-native architectures; prior leadership role in data platform transformations; experience with multi-region deployments and governance; ability to translate business requirements into scalable data models; passion for scalable performance optimization and data quality.

Benefits

  • Competitive compensation package.
  • Local health coverage (if required)
  • Opportunity to scale and lead a global SaaS platform that solves real-world customer challenges.
  • A direct, impactful role in shaping the future of AI-powered supplier-retailer collaboration.

Fully remote You can work from anywhere in the world.
Health coverage Krunchbox pays or copays health insurance for employees.
Computer provided Krunchbox provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal Krunchbox gives you paid vacations over the legal minimum.
Gross salary $1500 - 2000 Full time
Data Scientist
  • Artefact LatAm
Python Git Data Analysis SQL

We are Artefact, a leading global consultancy in creating value through data and AI technologies. We turn data into business impact across organizations' entire value chains, working with clients of all sizes, industries and countries. We are proud to say we are enjoying significant growth in the region, which is why we want you to join our team of highly skilled professionals to tackle complex problems for our clients.

Our culture is highly collaborative, with an environment of constant learning, where we believe innovation and solutions come from every member of the team. This drives us to act and to produce high-quality, scalable deliverables.

Your responsibilities will be:

  • Collect, clean and organize large volumes of data from diverse sources such as databases, flat files and APIs, applying exploratory analysis techniques to identify patterns, summarize the data's main characteristics and understand the client's problem.
  • Develop predictive models using advanced machine learning and statistical techniques to predict trends, identify patterns and produce accurate forecasts.
  • Optimize existing algorithms and models to improve accuracy, efficiency and scalability, tuning parameters and exploring new techniques.
  • Create clear, meaningful visualizations to communicate findings and results to the client effectively.
  • Communicate results effectively, telling a story that makes the findings easy to understand and supports the client's decision-making.
  • Design and develop custom analytical tools and data-driven decision-support systems, using programming languages such as Python, R or SQL.
  • Collaborative work: partner with multidisciplinary teams to tackle complex problems and provide comprehensive solutions to the client, and take part in projects of varying complexity, ensuring the quality of deliverables and compliance with agreed deadlines.
  • Research and stay up to date on data analysis, artificial intelligence and methodologies to improve analytical capabilities, quickly acquiring knowledge of different industries and specific tools.

The requirements for the role are:

  • Demonstrable knowledge of advanced analytics, whether from studies or work experience.
  • Proficiency with Python, SQL and Git.
  • Knowledge of relational databases
  • Knowledge of:
    • Data processing (ETL)
    • Machine Learning
    • Feature engineering and dimensionality reduction
    • Statistics and advanced analytics
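
The feature-engineering skill asked for above often starts with steps as simple as standardizing numeric columns and one-hot encoding categorical ones. A minimal stdlib-only sketch (the column contents are made up; in practice you would use pandas or scikit-learn):

```python
from statistics import mean, pstdev

def standardize(values):
    """Z-score a numeric column: zero mean, unit (population) std dev."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

def one_hot(values):
    """One-hot encode a categorical column into 0/1 indicator columns."""
    categories = sorted(set(values))
    return {c: [1 if v == c else 0 for v in values] for c in categories}

ages = [20, 30, 40]
plans = ["free", "pro", "free"]

scaled = standardize(ages)     # roughly [-1.22, 0.0, 1.22]
encoded = one_hot(plans)       # {"free": [1, 0, 1], "pro": [0, 1, 0]}
```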

Nice to have (not required):

Experience with:

  • BI tools (Power BI or Tableau)
  • Cloud services (Azure, AWS, GCP)
  • Non-relational databases (e.g. MongoDB)
  • Optimization

Some of our benefits:

  • A budget of USD 500 per year for training, whether courses, memberships, events or other.
  • Fast professional growth: a mentoring plan for training and career advancement, with raise and promotion review cycles every 6 months.
  • Up to 11 days of vacation beyond the legal minimum, to rest and maintain a healthy work-life balance.
  • A share of the company's profit bonus, plus bonuses for employee referrals and for clients.
  • A half day off on your birthday, plus a small gift.
  • Biweekly team lunches, paid for, at our hubs (Santiago, Bogotá, Lima and Mexico City).
  • Flexible hours and goal-oriented work.
  • Remote work, with the option to go hybrid (office in Santiago de Chile; paid cowork in Bogotá, Lima and Mexico City).
  • Extended paternity leave for men, and coverage of the health-system pay difference for women (Chile).

...and more!

Fully remote You can work from anywhere in the world.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Meals provided Artefact LatAm provides free lunch and/or other kinds of meals.
Paid sick days Sick leave is compensated (limits might apply).
Bicycle parking You can park your bicycle for free inside the premises.
Digital library Access to digital books or subscriptions.
Company retreats Team-building activities outside the premises.
Computer repairs Artefact LatAm covers some computer repair expenses.
Computer provided Artefact LatAm provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Informal dress code No dress code is enforced.
Vacation over legal Artefact LatAm gives you paid vacations over the legal minimum.
Beverages and snacks Artefact LatAm offers beverages and snacks for free consumption.
Parental leave over legal Artefact LatAm offers paid parental leave over the legal minimum.
Gross salary $1500 - 2000 Full time
Analytics Engineer
  • Artefact LatAm
SQL Business Intelligence ETL Power BI

We are Artefact, a leading global consultancy in creating value through data and AI technologies. We turn data into business impact across organizations' entire value chains, working with clients of all sizes, industries and countries. We are proud to say we are enjoying significant growth in the region, which is why we want you to join our team of highly skilled professionals to tackle complex problems for our clients.

Our culture is highly collaborative, with an environment of constant learning, where we believe innovation and solutions come from every member of the team. This drives us to act and to produce high-quality, scalable deliverables.

Your responsibilities will be:

  • Collect and analyze data from diverse sources to identify patterns and trends, extracting meaningful insights to understand current and future business performance.
  • Build data models tailored to different projects and industries.
  • Create and optimize reports, dashboards and scorecards for effective presentation of information, using BI tools such as Tableau, Power BI or QlikView.
  • Identify process-improvement opportunities through data analysis.
  • Maintain and update databases to guarantee their integrity and quality.
  • Provide training and support to the team in the use of Business Intelligence tools. Collaborate with diverse teams to build comprehensive solutions. Understand client needs and proactively propose improvements and solutions.
  • Monitor data science and machine learning models. Maintain data quality across information flows. Manage the security and scalability of cloud BI environments.

The requirements for the role are:

  • A degree in Industrial Civil Engineering, Mathematics, Computer Science or a related field
  • 1 to 2 years of work experience in:
    • BI projects
    • Visualization tools such as Power BI, Tableau, QlikView or others
    • BI solutions in cloud environments (e.g. Azure and Power BI Service)
    • Data sources (SQL Server, MySQL, APIs, Data Lake, etc.)
    • Writing SQL queries
    • Developing data models for analytical use and programming ETLs
  • Professional-level English

Nice to have (not required):

  • Knowledge of Python or R
  • Big Data skills, with an eye toward building reporting

Some of our benefits:

  • A budget of USD 500 per year for training, whether courses, memberships, events or other.
  • Fast professional growth: a mentoring plan for training and career advancement, with raise and promotion review cycles every 6 months.
  • Up to 11 days of vacation beyond the legal minimum, to rest and maintain a healthy work-life balance.
  • A share of the company's profit bonus, plus bonuses for employee referrals and for clients.
  • A half day off on your birthday, plus a small gift.
  • Biweekly team lunches, paid for, at our hubs (Santiago, Bogotá, Lima and Mexico City).
  • Flexible hours and goal-oriented work.
  • Remote work, with the option to go hybrid (office in Santiago de Chile; paid cowork in Bogotá, Lima and Mexico City).
  • Extended paternity leave for men, and coverage of the health-system pay difference for women (Chile).

...and more!

Fully remote You can work from anywhere in the world.
Gross salary $2400 - 3000 Full time
ETL Automation Google Cloud Platform Data lake

At Coderslab.io we work in a high-demand technology environment, with global teams combining top-tier talent. Our client FIFTECH leads advanced data initiatives and is developing the Datalake 2.0 project in Colombia. This role sits in the Data Factory area within the Platform, Architecture and Data management. The goal is to strengthen data processing in a Big Data environment on Google Cloud Platform (GCP), contributing to the continuous evolution of our Data Lake and to the delivery of reliable analytical information for strategic decisions.

Duties and responsibilities

  • Analyze, design, develop and test data ingestion (ELT) processes in GCP Big Data environments.
  • Maintain and evolve ETL/ELT processes, ensuring performance, scalability and reliability.
  • Develop serverless data pipelines and automate data flows for analytical operations.
  • Integrate, consolidate and clean data for use in analytics and reporting.
  • Support data platform architecture and design within the Data Factory unit, collaborating with multidisciplinary teams.
  • Participate in defining data modeling standards and data engineering best practices.

Required profile and experience

We are looking for a Senior Data Engineer with a solid background in ELT/ETL processes in Big Data environments on GCP and Data Lake. They must demonstrate the ability to design and implement data-driven pipelines, along with experience developing serverless pipelines and automating processes. The ability to model and structure data for analytical use will be valued, as will the ability to engage proactively in complex projects with a technical, collaborative approach. Autonomy, good communication and the ability to work in a fast-paced environment are expected.

Desirable requirements

Previous Data Lake experience on GCP, focused on ingesting and transforming large data volumes. Knowledge of orchestration and automation tools such as Airflow or GCP Workflows. Skills for working with Architecture and Product teams, analytical and problem-solving capabilities, and a results orientation. Experience in multinational environments and collaborative remote work will be valued.

Benefits and conditions

Fixed-term contract with an estimated duration of 6 months. Salary between 2,500,000 and 2,700,000 CLP, depending on experience. Equipment is not provided; a personal PC/notebook is required. You gain the advantages of working with a leading data-solutions client and a high-performing global team, with opportunities to learn and grow in cutting-edge technologies. Remote work, with possible coordination in Colombia and the region. If you are passionate about data engineering and want to contribute to an advanced Data Lake, we invite you to apply and join our team.

Fully remote You can work from anywhere in the world.
Gross salary $1000 - 1400 Full time
Data Analyst
  • Coderslab.io
  • Lima (Hybrid)
HTML5 Python Data Analysis BigQuery

CodersLab is a company dedicated to developing solutions within the IT sector. We are currently focused on expanding our teams globally to position our products in more Latin American countries, which is why we are looking for a Data Analyst.

We are looking for a Data Analyst to join our team and participate in the development of scalable, modern, high-impact mobile applications. You will work in a collaborative environment, on challenging projects, with real opportunities for growth.

Duties of the role

  • Develop channel-management functionality with Python and HTML5.
  • Produce functional documentation of the developments.
  • A degree in systems or a related field.
  • Experience in any sector; experience in the financial sector is a plus.
  • Collect and clean data: obtain data from diverse sources (databases, social networks, spreadsheets, etc.), then clean it by handling missing values, correcting errors and removing inconsistencies to ensure its quality.
  • Analyze data: use statistical techniques and other tools to identify correlations, trends and patterns within the data sets.
  • Interpret results: examine the analysis results to understand what they mean and how they can help the company make better decisions.
  • Communicate findings: present the analysis results clearly and understandably through reports, dashboards and visualizations (charts, tables) for stakeholders and other teams.
  • Identify risks and opportunities: detect trends, potential problems and growth opportunities for the company.
  • Support decision-making: facilitate strategic decisions across different areas of the organization, such as sales, inventory and service management.
  • Create reports and dashboards: generate periodic reports and interactive dashboards that update automatically to keep everyone informed.
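
The "collect and clean data" duty above usually means handling missing values and inconsistent labels before any analysis. A minimal stdlib sketch, imputing a missing numeric field with the column median and normalizing status labels (the field names and rules are illustrative, not from this role's actual data):

```python
from statistics import median

def clean(rows):
    """Impute missing 'amount' with the column median; normalize 'status'."""
    observed = [r["amount"] for r in rows if r["amount"] is not None]
    fill = median(observed)
    cleaned = []
    for r in rows:
        cleaned.append({
            "amount": r["amount"] if r["amount"] is not None else fill,
            "status": r["status"].strip().lower(),  # "OK", "Ok " -> "ok"
        })
    return cleaned

rows = [
    {"amount": 10.0, "status": "OK"},
    {"amount": None, "status": "failed"},
    {"amount": 30.0, "status": "Ok "},
]
cleaned = clean(rows)
```

With pandas this collapses to `fillna(df["amount"].median())` plus `str.strip().str.lower()`, but the logic is the same.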

Requirements for the role

2 to 3 years of experience

  • Experience with SQL Server
  • Experience with Python
  • Experience with BigQuery
  • Experience with GitLab
  • Experience with ETLs
  • Experience with HTML5
  • Experience in any sector; experience in the financial sector is a plus.

Conditions

Contract type: fee-based invoicing (recibo por honorarios)
Project duration: 6 months
Work model: hybrid (3 days at the office)

$$$ Full time
HTML5 Python Data Analysis BigQuery
At BC Tecnología we design IT solutions for clients in financial services, insurance, retail and government. We are looking for a Data Analyst to join a strategic Digital Migration project focused on evolving the digital channel of the client BFPE. You will participate in the development, documentation and continuous improvement of features for a new web platform as part of the migration from Telegestor APK. The role combines development, data transformation and process optimization to deliver a robust, scalable digital channel.
The project involves working jointly with technical and functional teams to ensure requirements, quality and efficient deployments. You will be part of an agile team focused on innovation and continuous improvement, contributing to a successful migration with direct impact on digital operations.

Duties

  • Develop new features for the digital channel using Python and HTML5.
  • Participate in data transformation and the construction of ETL pipelines.
  • Analyze, design and document technical and functional specifications.
  • Implement queries and data-exploitation processes in SQL Server and BigQuery.
  • Manage version control and deployments with GitLab.
  • Collaborate with technical and functional teams to ensure requirements are met.
  • Participate in testing, validation and deployment of improvements.
  • Propose improvements that strengthen the digital platform.

Description

We are looking for a Data Analyst with 2 to 3 years of experience in similar roles, able to work in a regulated, data-centric environment. Demonstrable technical requirements: SQL Server, Python, BigQuery, ETLs, GitLab and HTML5. Experience in banking, finance or highly regulated industries will be valued. The ideal candidate is analytical, focused on data quality, skilled at documenting and communicating findings clearly, and able to collaborate effectively with cross-functional teams. We offer a hybrid work model in Lima and the chance to take part in a strategic project with direct impact on the client's digital channel, within an environment of continuous learning and development.

Desirable

Previous experience in digital platform migration projects and in high-security information environments. Knowledge of visualization tools (Power BI, Tableau) and agile methodologies. Experience integrating data between legacy systems and modern platforms.

Benefits

At BC Tecnología we promote a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing between teams.
The hybrid model we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling better balance and a dynamic workplace.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that fosters inclusion, respect, and technical and professional development.

Gross salary $1900 - 3000 Full time
Data Engineer
  • Microsystem
  • Santiago (Hybrid)
Python Git SQL Docker
At Microsystem we are looking for talented, motivated people to join our Lab team ⚗️. To build cutting-edge infrastructure both in the cloud and On-Premise, we need Data Engineers eager to learn the new challenges computing brings 😎.
Under the guidance of a young, experienced team 🤓, we want to exploit the latest technologies as they appear, positioning ourselves in the market as a company that provides extraordinary tools to its clients 🚀.
Day to day we work to understand the client's needs, put ourselves in their context and deliver comprehensive solutions to their problems through computing 💻. So if you like programming, this is your place! You will constantly be challenged by new architectures, services and tools to implement, but always with the help of your team 💪.

Role responsibilities

  • Data infrastructure management, mainly on AWS.
  • ETL development and optimization: implementing scalable data pipelines that handle large volumes of information and support complex analyses.
  • Collaboration with the Development team to design data solutions that meet business needs, ensuring quality and reliability.
  • Development and maintenance of Python APIs and scripts for data manipulation and process automation.
  • Code repository management with GitHub, plus support for continuous integration (CI/CD).
  • Use of the AWS CDK (Cloud Development Kit) to automate infrastructure as code in the cloud.
  • Participation in technical and business meetings to ensure data solutions meet the organization's goals.
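As a rough illustration of the ETL pipeline work described above (a sketch with hypothetical table and field names, not Microsystem's actual stack), a minimal extract-transform-load step in Python might look like:

```python
import csv
import io
import sqlite3

# Hypothetical raw extract: CSV records from an upstream source.
RAW_CSV = """order_id,amount,currency
1,100.50,CLP
2,99.90,CLP
3,12.00,USD
"""

def extract(text: str) -> list[dict]:
    """Parse CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[tuple]:
    """Cast types and keep only CLP orders (an example business rule)."""
    return [
        (int(r["order_id"]), float(r["amount"]))
        for r in rows
        if r["currency"] == "CLP"
    ]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Write the transformed rows into a target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
```

In a production setting each stage would read from and write to real systems (S3, Redshift, etc.) and be orchestrated, but the extract/transform/load separation stays the same.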

Role requirements

  • 2 years of experience plus a professional degree in Informatics/Computer Science or Industrial Engineering, or 5 years of experience in related fields.
  • Programming languages:
    - Python (Advanced)
    - JavaScript (Intermediate)
    - SQL (Advanced)
  • Knowledge of containers (Docker)
  • Knowledge of the AWS cloud (Intermediate)
  • Git

Nice to have

  • Terraform ✨
  • Elasticsearch ✨
  • Kubernetes ✨
  • Kafka ✨
  • Linux ✨
  • Java Spring Boot (Intermediate)

Conditions

💫 Our team delivers end-to-end solutions, so you will learn everything from standing up a service to improving it over time.
🧔‍♀️ As a unit within a larger, mature company, there are many colleagues willing to share their experience and help you learn.
😎 We are laid-back: we understand if you have something urgent or errands to run during the day; what matters to us is that you commit to the deliverables.
🧑‍🎤 Come to the office dressed however you like, as long as you are wearing something.
🏋🏻 We have gym partnerships.

Accessible An infrastructure adequate for people with special mobility needs.
Internal talks Microsystem offers space for internal talks or presentations during working hours.
Life insurance Microsystem pays or copays life insurance for employees.
Meals provided Microsystem provides free lunch and/or other kinds of meals.
Paid sick days Sick leave is compensated (limits might apply).
Partially remote You can work from your home some days a week.
Bicycle parking You can park your bicycle for free inside the premises.
Computer provided Microsystem provides a computer for your work.
Fitness subsidies Microsystem offers stipends for sports or fitness programs.
Informal dress code No dress code is enforced.
Beverages and snacks Microsystem offers beverages and snacks for free consumption.
Gross salary $1400 - 2000 Full time
Python SQL Business Intelligence Power BI
At MAS Analytics we believe data is the engine of business transformation. Our team works hand in hand with leading organizations to turn information into strategic decisions, using technologies from the market's biggest players (AWS, Azure, and GCP). As part of this team, you will have the opportunity to join projects that combine data engineering, advanced analytics, and consulting, in a collaborative environment where innovation and continuous learning are part of the day-to-day.
Here you will develop not only technology but also your career. We offer a space to grow, learn, and take part in projects that make a difference in companies' digital transformation. You will be part of a culture that values innovation, collaboration, and real impact.


Role responsibilities

Your mission will be to bring to life solutions that let our clients get the most value out of their data. You will help build pipelines that ensure data integration and quality, design efficient data models, and collaborate on architectures in cloud environments such as AWS, GCP, or Azure.
You will also be responsible for developing reports and dashboards that support strategic decision-making.
But this role goes beyond the technical: you will work directly with clients, understanding their needs, taking part in requirements gathering, and proposing solutions that impact their business.
You will be part of a team that delivers not only technology, but also trust and value.

Requirements

We are looking for someone with up to one year of experience in data analysis or data engineering, with knowledge of SQL, data modeling, and BI tools such as Power BI or Tableau. Some programming background (Python or R) and basic experience with cloud environments are a plus.
Beyond the technical side, we want a curious, proactive person with strong client-communication skills, able to plan and organize their work across dynamic projects.
If you are passionate about learning, solving problems, and working as a team, this role is for you.

Conditions

  1. Flexible hours
  2. Casual dress code 😍😍
  3. Funding for courses, certifications, and training 👨‍🎓👩‍🎓
  4. Young work environment
  5. Half-day Fridays
  6. Extra vacation days 🌞🏝
  7. Birthday celebrations 🎁🎊
  8. Company activities 🍻⚽
And many more for you to discover…

Library Access to a library of physical books.
Accessible An infrastructure adequate for people with special mobility needs.
Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Internal talks MAS Analytics offers space for internal talks or presentations during working hours.
Life insurance MAS Analytics pays or copays life insurance for employees.
Partially remote You can work from your home some days a week.
Bicycle parking You can park your bicycle for free inside the premises.
Digital library Access to digital books or subscriptions.
Health coverage MAS Analytics pays or copays health insurance for employees.
Company retreats Team-building activities outside the premises.
Computer repairs MAS Analytics covers some computer repair expenses.
Dental insurance MAS Analytics pays or copays dental insurance for employees.
Computer provided MAS Analytics provides a computer for your work.
Performance bonus Extra compensation is offered upon meeting performance goals.
Vacation over legal MAS Analytics gives you paid vacations over the legal minimum.
Vacation on birthday Your birthday counts as an extra day of vacation.
$$$ Full time
Data Engineer
  • BC Tecnología
  • Santiago (Hybrid)
Python SQL NoSQL ETL
BC Tecnología is an IT consultancy that manages portfolios, develops projects, and provides outsourcing and professional recruitment services. Our focus is building agile teams for Infrastructure, Software Development, and Business Units, working with clients in financial services, insurance, retail, and government. You will take part in innovative projects for high-profile clients, with a multidisciplinary team and a culture of learning and professional growth. You will join an organization that prioritizes data quality, security, and governance while driving high-impact solutions for strategic decision-making.


Responsibilities

  • Design, build, and maintain robust, scalable, and efficient data pipelines (ETL) to process large volumes of information from multiple sources.
  • Manage the ingestion, processing, transformation, and storage of structured and unstructured data.
  • Implement data engineering solutions in cloud environments, preferably AWS (Glue, Redshift, S3, etc.).
  • Translate business needs into viable, sustainable technical requirements.
  • Collaborate with multidisciplinary teams (business, analytics, IT) to deliver valuable solutions.
  • Apply good practices in development, security, quality, and data governance: code versioning, testing, and documentation.
  • Participate in data communities, drive continuous improvement, and keep documentation up to date.

Requirements and skills

We are looking for a Data Engineer with at least 3 years of experience in data engineering roles and proven experience with ETLs and cloud data architectures. The ideal candidate has worked in agile environments; experience in retail or related sectors is a plus.
Required technical knowledge:
  • Cloud computing: AWS (Glue, Redshift, S3, among others).
  • Pipeline orchestration: Apache Airflow.
  • Programming languages: Python (preferred) or Java.
  • Data storage: SQL, NoSQL, data warehouses.
  • Good practices: version control, testing, and documentation.
Soft skills: client orientation, the ability to work in multidisciplinary teams, proactivity, analytical thinking, and communication skills to translate technical requirements into business terms.

Nice to have

Experience in retail and in projects involving data governance and compliance, experience with visualization and analytics tools, knowledge of data security and regulatory compliance, and experience with data stack migrations or modernization.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that encourages inclusion, respect, and technical and professional growth.

Gross salary $3500 - 5200 Full time
Senior Data Engineer
  • Vequity
Python SQL Machine Learning AI Integration

Vequity is building the world’s most robust, contextualized buyer intelligence network for investment banks, private equity firms, and strategic acquirers. Our platform currently houses over 1.5 million buyer profiles with approximately 100 structured and inferred data fields per profile. We leverage proprietary AI agents to continuously enrich, infer, and structure buyer intelligence at scale. As a Senior Data Engineer, you will own the architecture, quality, and scalability of our data ecosystem—from ingestion and cleaning to inference and output generation. You will partner with AI, product, and engineering teams to deliver data APIs and feeds that power our platform's decision-support capabilities. Your work will directly impact data reliability, operational efficiency, and the precision of buyer attributes used across our customers.


Key Responsibilities

Multi-Source Data Architecture

  • Work with systems handling multiple write paths: external providers, LLM hygiene agents, and customer-claimed edits
  • Define standards for data versioning, lineage, and observability across pipelines


Entity Lifecycle & Master Data Management

  • Handle entity lifecycle complexity: mergers, acquisitions, spin-offs, rebranding, and temporal relationship changes
  • Design entity resolution systems using deterministic blocking (fuzzy matching, location) combined with LLM-based evaluation for match decisions
  • Build confidence scoring models and surface low-confidence cases for human review
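The blocking-plus-scoring approach above can be sketched in a few lines of Python. This is an illustrative toy (hypothetical records, with a simple string-similarity score standing in for LLM-based match evaluation), not Vequity's implementation:

```python
from difflib import SequenceMatcher

# Hypothetical company records from two sources; field names are illustrative.
records_a = [{"id": "a1", "name": "Acme Holdings Inc", "city": "Austin"}]
records_b = [
    {"id": "b1", "name": "ACME Holdings, Inc.", "city": "Austin"},
    {"id": "b2", "name": "Apex Partners", "city": "Boston"},
]

def normalize(name: str) -> str:
    """Cheap canonical form used for comparison."""
    return "".join(c for c in name.lower() if c.isalnum() or c == " ").strip()

def candidate_pairs(src, dst):
    """Deterministic blocking: only compare records sharing a city."""
    for a in src:
        for b in dst:
            if a["city"] == b["city"]:
                yield a, b

def match_confidence(a, b) -> float:
    """Fuzzy name similarity as a stand-in for a richer scoring model."""
    return SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()

matches = [
    (a["id"], b["id"], round(match_confidence(a, b), 2))
    for a, b in candidate_pairs(records_a, records_b)
]
low_confidence = [m for m in matches if m[2] < 0.9]  # route these to human review
```

Blocking keeps the comparison count tractable; the confidence threshold is where low-certainty matches get surfaced for human review, as described above.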

Machine Learning & Matching Systems

  • Work with embeddings infrastructure: vector generation, retrieval optimization, and quality measurement
  • Optimize semantic search pipelines including embedding strategies, namespace design, and reranking
  • Establish evaluation frameworks to measure model performance against human judgment
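For intuition on embedding-based retrieval, ranking documents by cosine similarity against a query vector is the core operation. The vectors below are tiny toy stand-ins for real embeddings, and the document names are hypothetical:

```python
import math

# Toy 3-dimensional vectors standing in for real embeddings.
corpus = {
    "doc_growth_equity": [0.9, 0.1, 0.0],
    "doc_industrials":   [0.1, 0.8, 0.3],
}
query = [0.8, 0.2, 0.1]

def cosine(u, v):
    """Cosine similarity: the usual ranking score in embedding retrieval."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Retrieve documents ordered by similarity to the query.
ranked = sorted(corpus, key=lambda d: cosine(query, corpus[d]), reverse=True)
```

Production systems delegate this to a vector store with approximate nearest-neighbor indexes, then optionally rerank the top results, but the similarity ranking shown here is the primitive being optimized.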

Collaboration & Team Development

  • Educate and mentor the engineering team on data best practices, patterns, and common pitfalls
  • Lead continuous improvement of the data infrastructure roadmap

Relationship & Graph Modeling

  • Design data models for complex relationships: parent/subsidiary hierarchies, PE firm → portfolio company chains
  • Evaluate and implement graph query capabilities (Apache AGE, Neo4j, or optimized Postgres patterns) for relationship traversal that semantic search cannot address
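As an example of the "optimized Postgres patterns" alternative, a recursive CTE can traverse a parent/subsidiary hierarchy. The sketch below uses SQLite for portability; the entity names and table schema are hypothetical:

```python
import sqlite3

# Hypothetical ownership edges: parent company -> subsidiary.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ownership (parent TEXT, child TEXT)")
conn.executemany(
    "INSERT INTO ownership VALUES (?, ?)",
    [("PE Fund", "HoldCo"), ("HoldCo", "OpCo A"), ("HoldCo", "OpCo B")],
)

# Recursive CTE: walk the hierarchy downward from a root entity.
descendants = conn.execute(
    """
    WITH RECURSIVE tree(entity, depth) AS (
        SELECT child, 1 FROM ownership WHERE parent = ?
        UNION ALL
        SELECT o.child, t.depth + 1
        FROM ownership o JOIN tree t ON o.parent = t.entity
    )
    SELECT entity, depth FROM tree ORDER BY depth, entity
    """,
    ("PE Fund",),
).fetchall()
```

The same query shape works in Postgres; a dedicated graph database becomes attractive when traversals are deep, frequent, or need graph-specific algorithms rather than simple reachability.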

Data Quality, Testing & Operations

  • Build quality-control layers including confidence scoring, human-in-the-loop validation, and automated anomaly detection
  • Implement testing strategies including data contracts, pipeline unit tests, and integration testing
  • Build proactive monitoring, alerting, and runbooks for data health issues
  • Ensure compliance with data governance, privacy, and security standards

Description

  • 5+ years in data engineering with strong Python (Pydantic a bonus), SQL, and cloud data stacks (including GCP)
  • Experience with orchestration frameworks (Airflow, Dagster, Prefect) and/or data platforms (Databricks)
  • Experience designing or integrating AI/LLM agents for data enrichment with structured AI → JSON → database pipelines including error recovery and monitoring
  • Understanding of embedding-based retrieval
  • Excellent communication and cross-team collaboration skills

Desirable

  • Prior experience with Machine Learning algorithms / semantic search
  • Prior experience with entity resolution or master data management — you understand why matching company records is fundamentally hard
  • Familiarity with graph databases or graph query patterns (Neo4j, Apache AGE, recursive CTEs) for complex entity relationships
  • Experience with event sourcing or append-only architectures for audit trails and data replay
  • Background in investment data, market intelligence, or deal sourcing platforms
  • Familiarity with agent orchestration tools (LangChain, LlamaIndex) and data quality frameworks (dbt, Great Expectations)
  • Experience as an early/first data hire at a startup
  • Understanding of prompt engineering, MCP Servers, function calling, and embedding-based retrieval

Benefits

Competitive compensation and Paid Time Off (PTO).

Fully remote You can work from anywhere in the world.
$$$ Full time
Data Engineer
  • BC Tecnología
  • Santiago (Hybrid)
Python SQL Apache Spark CI/CD
BC Tecnología is an IT consultancy specializing in portfolio management, project development, and staff outsourcing for Infrastructure, Software Development, and Business Units. Focused on clients in financial services, insurance, retail, and government, the company delivers solutions through agile methodologies and an organizational change framework centered on product development. The Data Engineer will join challenging projects aimed at optimizing data flows, governance, and scalability, supporting clients to a high quality standard and continuously improving processes and pipelines.


Main responsibilities

  • Design and build efficient pipelines to move and transform data, ensuring performance and scalability.
  • Guarantee consistency and reliability through unit tests and data quality validations.
  • Implement CI/CD flows for development and production environments, promoting DevOps best practices.
  • Design advanced pipelines applying resilience, idempotency, and event-driven patterns.
  • Contribute to data governance through metadata, catalogs, and lineage.
  • Collaborate with technical leads and architects to define standards, guidelines, and process improvements.
  • Align technical solutions with business requirements and delivery goals.
  • Lean on technical leads for team guidelines and best practices.
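One of the patterns named above, idempotency, can be sketched as an upsert keyed on a natural identifier, so re-running a load does not duplicate rows. This is a generic illustration (SQLite, with hypothetical table and key names), not this team's actual code:

```python
import sqlite3

# Hypothetical target table keyed by a natural/business key (event_id),
# so re-processing the same batch is a no-op rather than a duplicate insert.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_id TEXT PRIMARY KEY, amount REAL)")

def load_batch(conn: sqlite3.Connection, batch: list[tuple]) -> None:
    """Idempotent load: the upsert makes retries safe."""
    conn.executemany(
        "INSERT INTO events (event_id, amount) VALUES (?, ?) "
        "ON CONFLICT(event_id) DO UPDATE SET amount = excluded.amount",
        batch,
    )

batch = [("e1", 10.0), ("e2", 20.0)]
load_batch(conn, batch)
load_batch(conn, batch)  # retry of the same batch: same result, no duplicates
count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```

The same idea appears in warehouse pipelines as MERGE statements or partition overwrites: the load is defined by its key, so failures can simply be retried.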

Requirements and experience

We are looking for a Data Engineer with at least 2 years of experience designing and building data pipelines. You should have an advanced command of Python, Spark, and SQL, and experience working in the AWS ecosystem (Glue, S3, Redshift, Lambda, MWAA, among others). Experience with lakehouses (Delta Lake, Iceberg, Hudi) and knowledge of CI/CD (Git) and version control are desirable. Previous experience in retail environments and in data quality and governance projects is valued, as is experience building integrations to and from APIs and using IaC (Terraform).
Effective communication, teamwork, and proactivity are required. We value the ability to learn, cross-team collaboration, and a results-driven mindset in a dynamic environment with high-profile clients.

Nice to have

Previous experience in retail or regulated sectors. Knowledge of data quality and governance. Experience building integrations to and from APIs. Knowledge of pipeline orchestration and monitoring tools. Familiarity with data security best practices and regulatory compliance. The ability to communicate technical concepts to non-technical audiences and to foster a culture of continuous improvement.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that encourages inclusion, respect, and technical and professional growth.

$$$ Full time
Data Engineer
  • BC Tecnología
  • Santiago (Hybrid)
Python Microstrategy ETL SQL Server
BC Tecnología, an IT consultancy specializing in IT services and business solutions, is looking for a Data Engineer for a hybrid project based in Las Condes, Santiago. You will join a BI/Analytics team to develop, optimize, and maintain data solutions in analytics environments, working with high-profile clients in sectors such as finance, insurance, retail, and government. The project involves collaborating with BI, Analytics, and IT teams, contributing to the implementation of data pipelines, data modeling, and the delivery of strategically valuable reports and dashboards.


Role responsibilities

  • Develop and optimize SQL Server queries and models to support analytical reporting.
  • Design, implement, and maintain data pipelines, integrating sources across cloud platforms (AWS, Azure, or GCP).
  • Develop and maintain MicroStrategy reports and dashboards for business users.
  • Collaborate with BI, Analytics, and IT teams to understand requirements and deliver efficient solutions.
  • Identify improvements in performance, scalability, and data quality; apply data governance best practices.

Requirements and profile

  • At least 3 years of experience as a Data Engineer or BI Engineer.
  • Proven experience with SQL Server and MicroStrategy.
  • Experience working with at least one cloud (AWS, Azure, or GCP).
  • The ability to work collaboratively, focus on results, and communicate well with stakeholders.
  • Knowledge of data modeling concepts, extract-transform-load (ETL/ELT), and data quality best practices.

Skills and assets

  • Certifications in SQL Server, data platforms, or related cloud technologies.
  • Experience with visualization and dashboard tools beyond MicroStrategy.
  • Knowledge of Python or scripting languages for data transformations.
  • A proactive attitude, analytical thinking, and the ability to work autonomously in dynamic environments.

Benefits

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that encourages inclusion, respect, and technical and professional growth.

Health coverage BC Tecnología pays or copays health insurance for employees.
Computer provided BC Tecnología provides a computer for your work.
$$$ Full time
Data Engineer AWS
  • BC Tecnología
SQL Big Data AWS Lambda Data Architecture
At BC Tecnología we are looking for an AWS Data Engineer to collaborate on high-impact projects for clients in sectors such as financial services, insurance, retail, and government. Our team, part of an IT consultancy focused on innovative solutions, works in big data and cloud environments, designing and operating scalable infrastructure for data processing and advanced analytics. You will take part in migration projects, data pipeline design, AWS solution implementation, and data operations, with a focus on quality, security, and compliance. You will join an agile team driving business-oriented solutions and operational efficiency.


Main responsibilities

  • Design, build, and maintain data pipelines in AWS environments (Glue, Lambda, Step Functions, Redshift, Athena, Lake Formation).
  • Manage data architecture and clusters, ensuring performance, scalability, and information security.
  • Implement IAM policies and access controls, guaranteeing compliance and security best practices.
  • Collaborate with data scientists and business teams to turn requirements into efficient technical solutions.
  • Participate in the continuous improvement of processes, automation, and data flow monitoring.

Profile and skills

  • At least 2 years of experience as a Data Engineer, preferably in big data and AWS cloud environments.
  • AWS knowledge: Glue, Lambda, Step Functions, Redshift, Lake Formation, SQL, Athena, and IAM policy management.
  • Experience with databases and cluster architectures; the ability to optimize performance and cost.
  • Strong problem-solving skills, analytical thinking, and a results-oriented mindset.
  • A good communicator, able to work in agile teams and adapt solutions to business requirements.
  • Languages: Spanish; technical English skills are valued.

Nice to have

  • AWS certifications (e.g., AWS Data Analytics, AWS Solutions Architect).
  • Experience with data orchestration and additional orchestration tools (e.g., Step Functions, Airflow).
  • Knowledge of data security, regulatory compliance, and DevOps/DataOps best practices.
  • Experience with data migration projects, handling of sensitive data, and pipeline observability.

Benefits and environment

At BC Tecnología we foster a collaborative work environment that values commitment and constant learning. Our culture is oriented toward professional growth through integration and knowledge sharing across teams.
The hybrid arrangement we offer, based in Las Condes, combines the flexibility of remote work with in-person collaboration, enabling a better balance and a more dynamic workday.
You will take part in innovative projects with high-profile clients across diverse sectors, in an environment that encourages inclusion, respect, and technical and professional growth.

Fully remote You can work from anywhere in the world.
Health coverage BC Tecnología pays or copays health insurance for employees.
Computer provided BC Tecnología provides a computer for your work.
Gross salary $2800 - 3600 Full time
Data Engineer
  • Checkr
  • Santiago (Hybrid)
Python SQL Kubernetes CI/CD
Checkr is expanding its innovation hub in Santiago to drive the accuracy and intelligence of its background-check engine at global scale. This team works closely with the US offices to optimize the screening engine, detect fraud, and evolve the platform with GenAI models. The selected candidate will join a strategic effort to balance speed, cost, and accuracy, impacting millions of candidates and improving the experience of clients and partners. The role involves leading optimization initiatives, designing analytics strategies, and developing predictive models within a high-performance tech stack.


Role responsibilities

  • Create, maintain, and optimize critical data pipelines that underpin Checkr's platform and data products.
  • Build tools that help streamline the management and operation of our data ecosystem.
  • Design scalable, secure systems to handle the enormous flow of data as Checkr continues to grow.
  • Design systems that enable repeatable, scalable machine learning workflows.
  • Identify innovative applications of data that can lead to new products or insights, and help other Checkr teams maximize their own impact.

Qualifications and requirements

  • 2+ years of industry experience in a data engineering or backend-related role, and a bachelor's degree or equivalent experience.
  • Programming experience in Python or SQL: proficiency in one is required, plus at least working experience in the other.
  • Experience developing and maintaining production data services.
  • Experience with data modeling, security, and governance.
  • Familiarity with modern CI/CD practices and tools (e.g., GitLab and Kubernetes).
  • Experience with, and a passion for, mentoring other data engineers.

Conditions

  • A collaborative, fast-moving environment
  • Being part of an international company headquartered in the United States
  • A learning and development reimbursement allowance
  • Competitive compensation and opportunities for professional and personal growth
  • 5 additional vacation days and flexibility to take time off
  • 100% medical, dental, and vision coverage for employees and dependents
  • Equipment reimbursement for working from home
At Checkr, we believe a hybrid work environment strengthens collaboration, drives innovation, and fosters connection. Our main hubs are Denver, CO; San Francisco, CA; and Santiago, Chile.
Equal employment opportunity at Checkr

Checkr is committed to hiring qualified, talented people from diverse backgrounds for all of its technical, non-technical, and leadership roles. Checkr believes that bringing together and celebrating unique backgrounds, qualities, and cultures enriches the workplace.

Flexible hours Flexible schedule and freedom for attending family needs or personal errands.
Partially remote You can work from your home some days a week.
Health coverage Checkr pays or copays health insurance for employees.
Computer provided Checkr provides a computer for your work.
Informal dress code No dress code is enforced.
Vacation over legal Checkr gives you paid vacations over the legal minimum.
Beverages and snacks Checkr offers beverages and snacks for free consumption.

About Data Engineering jobs

Remote Data Engineering jobs: data pipelines, ETL, data architecture, and big data. At RemoteJobs.lat we connect professionals across Latin America with companies offering 100% remote work. All our listings let you work from any city, with payment in dollars or another international currency.

Salary range

$4,000 - $11,000 USD/month

Open positions

327

Location

100% Remote LATAM

Tip: You can also browse listings under related skills such as Python and SQL.

Data Engineering salary ranges by seniority

Estimated ranges in USD/month for remote contracts with international companies. They vary by company, complementary stack, and client location.

Level       Years of experience   Range (USD/month)
Junior      0-2                   $4,000 - $5,750
Mid-level   2-4                   $5,400 - $7,850
Senior      4-7                   $7,500 - $9,950
Lead/Staff  7+                    $9,250 - $11,000

Companies hiring remote Data Engineering from LATAM

Some companies that have historically hired Data Engineering profiles to work 100% remotely from Latin America:

Mercado Libre · Globant · Auth0 · Nubank · CloudWalk · Stripe · GitLab · Crossover · Toptal

Frequently asked questions

The typical range for a remote Data Engineering role at international companies is $4,000 - $11,000 USD/month. The exact amount depends on seniority, the company's country, and whether the contract is full-time or project-based.

The most in-demand Data Engineering profiles usually combine Python, SQL, and Spark. Adding one of these opens up more job offers and often raises the salary range by 15% to 30%.

For US/EU companies, English is required: B2 minimum for technical interviews. There are alternatives at LATAM companies (Mercado Libre, Globant, Rappi) or agencies like Toptal, where intermediate English is enough to start.

The three highest-impact things you can do: (1) a public GitHub with 2-3 solid projects relevant to Data Engineering, (2) an English LinkedIn profile optimized for recruiters, and (3) applying to 20+ offers per week instead of 2-3.