Chalk x ODSC Meetup Recap 2026

by Linda Zhou, Marketing Manager
February 2, 2026

Last week, we kicked off 2026 with ODSC's first San Francisco meetup of the year at our new Union Square office! We had ML engineers from fintech, e-commerce, and enterprise software come through to hear our co-founder Elliot Marx talk about why most marketplace recommender systems fail in production, and what to do about it.

[Photo: Ellen from ODSC introducing Elliot, kicking off the meetup]

Spoiler: it's not a model quality problem.

The Core Problem

Most production recommenders rely on batch pipelines and precomputed features. For marketplaces, this creates two ceilings on relevance: you can't precompute every user-item combination, so you have to guess which pairs to score. And even for the pairs you do precompute, the features go stale before decision time. You're either missing users or serving them outdated context, and often both.

The Live Demo

Elliot's talk centered on a different approach: computing features on demand, in real time.

[Photo: Elliot demoing to a room of engineers]

Think of it less like a cache-first feature store and more like a query execution engine: determining at request time which resolvers to run, which data sources to hit, and how to join the results under tight latency budgets.

Elliot's demo walked through the mechanics:

Writing resolvers: He showed how to define new features in Python, like a name_email_match_score feature that scores the similarity between a user's name and email address. Resolvers declare their dependencies in their function signatures, making the feature graph explicit.
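
For a flavor of what that looks like, here is a minimal sketch in the spirit of the demo. It loosely follows Chalk's documented Python API (a feature class plus a decorated resolver); the scoring heuristic is a stand-in, not the exact logic Elliot showed.

```python
from chalk import online
from chalk.features import features


@features
class User:
    id: int
    name: str
    email: str
    name_email_match_score: float


@online
def get_name_email_match_score(
    name: User.name,
    email: User.email,
) -> User.name_email_match_score:
    # The signature declares the resolver's inputs (User.name, User.email)
    # and its output (User.name_email_match_score), which is what keeps the
    # feature graph explicit. The scoring below is a placeholder heuristic.
    local_part = email.split("@")[0].lower()
    normalized_name = name.lower().replace(" ", "")
    return 1.0 if normalized_name and normalized_name in local_part else 0.0
```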

Query planning and execution: When a feature is requested, Chalk generates a query plan and determines the optimal execution path. It automatically pushes filters down to your data sources and compiles Python resolver logic to C++ to hit single-digit millisecond latencies at production scale.
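
To make "when a feature is requested" concrete: a caller names the outputs it wants, and the planning happens behind that call. A hedged sketch using Chalk's Python client, assuming credentials are already configured and User is the feature class sketched above:

```python
from chalk.client import ChalkClient

client = ChalkClient()  # assumes API credentials are set in the environment

# Chalk plans which resolvers to run for the requested output, pushes
# filters down to the underlying sources, and executes the compiled plan.
result = client.query(
    input={User.id: "u_123"},  # illustrative entity id
    output=[User.name_email_match_score],
)
print(result.get_feature_value(User.name_email_match_score))
```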

Joining across data sources: The system queries across Snowflake, Postgres, and online stores at request time, so you don't need to denormalize everything upfront.
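
A rough illustration of how those sources get wired up, based on our reading of Chalk's SQL-source API. The integration names here are hypothetical, and the exact query_string signature may differ from what's shown:

```python
from chalk import online
from chalk.features import Features
from chalk.sql import PostgreSQLSource, SnowflakeSource

# Hypothetical integration names; credentials live in Chalk's config.
pg = PostgreSQLSource(name="users_db")
sf = SnowflakeSource(name="analytics_warehouse")  # other resolvers can draw
                                                  # from Snowflake the same way


@online
def get_user_profile(uid: User.id) -> Features[User.name, User.email]:
    # Resolved against Postgres at request time, so nothing has to be
    # denormalized into a single store ahead of time.
    return pg.query_string(
        "SELECT name, email FROM users WHERE id = :id",
        fields={"name": User.name, "email": User.email},
        args={"id": uid},
    ).first()
```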

Temporal consistency: For training data, Chalk performs point-in-time lookups so your model never trains on features that wouldn't have existed at prediction time, preventing data leakage when backfilling new features.
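
On the training side, that point-in-time behavior surfaces through offline queries: you pass an observation timestamp per row, and each row's features are computed as of that moment. A sketch with illustrative data (parameter names follow the client docs as we understand them):

```python
from datetime import datetime, timezone

from chalk.client import ChalkClient

client = ChalkClient()

# Each row is resolved "as of" its timestamp, so a backfilled feature can
# never leak information from after the labeled event it trains against.
dataset = client.offline_query(
    input={User.id: ["u_1", "u_2"]},
    input_times=[
        datetime(2025, 11, 3, tzinfo=timezone.utc),
        datetime(2025, 12, 18, tzinfo=timezone.utc),
    ],
    output=[User.name_email_match_score],
)
# The returned dataset can then be materialized (e.g., to a DataFrame)
# for model training.
```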

The Q&A ran well past its scheduled end: engineers stuck around over drinks and bites, digging into query optimization, handling highly dynamic features, and how this architecture compares to their current setups.

What Resonated

The room was full of people working on similar problems: keeping features fresh under tight latency constraints and dealing with multi-entity relationships in marketplaces.

Treating recommender systems as query execution problems instead of offline prediction pipelines means:

  • No more guessing which user-item pairs to precompute
  • No more features gone stale because batch jobs only run on a schedule
  • One codebase for training and serving
  • Richer cross-entity signals because you can join at decision time instead of flattening relationships upfront

For teams running marketplace recommenders, fraud systems, or any high-frequency decision system where context changes constantly, this reframing changes what's architecturally possible!

What's Next

This was the first of many events we're planning in our SF office. If you're working on production ML systems and want to see Chalk in action, book a demo to walk through how decision-time feature computation could work for your use case.

Thanks to everyone who came through, and to ODSC for co-hosting a great start to the year!

Want to stay up to date with Chalk?

Subscribe for updates on what we’re building (and shipping!) at Chalk