Reveal Shares Key Takeaways from JSM

This week, Reveal was honored both to present original research and to participate in the critical conversations shaping the future of statistics, AI, and data science at the Joint Statistical Meetings (JSM) 2025 in Nashville, TN. JSM is the largest gathering of statisticians held in North America. From differential privacy to Large Language Model (LLM) adoption, our team engaged with thought leaders across sectors to ensure we continue delivering the best thinking, tools, and strategy to our clients. Below are our key takeaways, along with the solutions we are providing in each area.

1. Differential Privacy: Not Just a Tech Problem

Balancing privacy and utility (especially with AI!) is a people and process challenge, not just a technical one. Cultural alignment and workflow design are just as important as algorithm selection. (We’re on it! Change management and culture design are baked into all of our solutions and offerings.)

2. Synthetic Data Can Enable Powerful Data Integrations

Research into how synthetic data can help meet differential privacy requirements is growing rapidly. Preserving the statistical properties of the target data is challenging, but when accomplished it enables greater data sharing across organizations and supports more timely, more secure releases of information. (We’re on it! Read more about our synthetic data applications here.)

3. LLMs Are Tools, Not End-to-End Solutions

Despite their popularity, Large Language Models (LLMs) aren't plug-and-play replacements. They’re one part of a broader pipeline, always bracketed by human evaluation and decision-making. Areas with unmet needs (e.g., rare disease diagnostics) show higher openness to early adoption than mature, stable domains. (We’re on it! Take a look at how we improved the American Community Survey Autocoder with LLMs here.)

4. Uncertainty Quantification: Everyone's Grappling With It

Whether in survey science or biostatistics, a shared concern emerged: how to trust LLM outputs. Promising methods include cross-run consistency tests, outcome distribution analysis, and sampled task-level subject matter expert evaluations. Without baked-in evaluation frameworks, LLM use in your processes is "buyer beware!" (We’re on it! Read more about our mixed methods approach to AI-assisted survey translation here.)
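To make the cross-run consistency idea concrete, here is a minimal, hypothetical sketch (not Reveal's actual tooling; the function name and data are illustrative). It runs the same labeling task through an LLM several times and reports the share of items on which a majority of runs agree, a simple signal of output stability.

```python
from collections import Counter

def cross_run_agreement(label_runs):
    """Given several independent runs of the same LLM labeling task
    (one list of labels per run, aligned by item), return the fraction
    of items on which a strict majority of runs agree."""
    n_runs = len(label_runs)
    n_items = len(label_runs[0])
    agreements = 0
    for item_labels in zip(*label_runs):  # labels for one item across runs
        top_count = Counter(item_labels).most_common(1)[0][1]
        if top_count > n_runs / 2:
            agreements += 1
    return agreements / n_items

# Three hypothetical runs over four items; item 2 gets a different
# label every run, so only 3 of 4 items reach majority agreement.
runs = [
    ["A", "A", "A", "C"],  # run 1
    ["A", "B", "B", "C"],  # run 2
    ["A", "C", "B", "C"],  # run 3
]
print(cross_run_agreement(runs))  # 0.75
```

Low agreement flags items for the sampled subject-matter-expert review mentioned above, rather than letting unstable outputs flow downstream unchecked.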

5. Momentum Toward Open Source Is Accelerating

From federal agencies to pharma-adjacent organizations, many are looking to migrate away from SAS. Reveal is ahead of the curve here, with tested methods and tooling that simplify the path to modern, open infrastructure. (We’re on it! Read more about our SAS to Python work here).

6. Better Questions, Better Outcomes

A call to action for consultants: define sharper, measurable research questions from the outset. Doing so builds clearer alignment between stakeholders and statistically grounded outcomes. (We’re on it! We recently launched the Reveal Innovation Lab to house, foster, and expand programs that create new solutions for the company and its clients).

7. Ethical AI: Beyond the Consent Form

As AI enters sensitive spaces, traditional informed consent is under pressure. Sessions highlighted new models for transparency, participant understanding, and responsible system deployment. (We’re on it! Governance and ethics are at the heart of everything we do. Read more about our capabilities here).

Interested in learning more about our takeaways from JSM or exploring ways we can help you apply these findings? Let’s chat! Please email us at office@revealgc.com.
