AI-Enhanced Audit and Correspondence Experiments - A Short Course
A 4-Day Livestream Seminar Taught by Charles Crabtree, Ph.D.
Design, Inference, and Validity in the Age of AI Agents
Audit and correspondence experiments remain among the most powerful tools for studying discrimination, bias, and social boundaries. AI agents create new opportunities for this work by helping researchers scale correspondence, standardize stimulus construction, manage treatment delivery, and coordinate fieldwork. But these opportunities come with real risks. If used carelessly, AI agents can introduce unintended variation, alter the meaning of signals, and weaken the identifying assumptions that make audit studies persuasive in the first place.
This seminar is built around a simple premise: in audit studies, design discipline matters more than tool sophistication. The goal is not to automate correspondence experiments for its own sake. It is to show researchers how to use AI agents in narrow, controlled, and methodologically defensible ways that strengthen rather than undermine experimental leverage. You will learn how to integrate agentic systems into audit-study workflows while preserving information equivalence, signal control, interpretability, and transparency.
This is a method-specific seminar grounded in the logic of audit and correspondence experiments. It is not a general course on AI for social science, prompt engineering, or automated text generation. Instead, it focuses on the distinctive design problems that emerge when AI agents are introduced into studies where small differences in wording, style, presentation, timing, sequence, or delivery can fundamentally alter causal interpretations.
Starting August 18, this seminar will be presented as a 4-day synchronous, livestream workshop via Zoom. Each day will feature two lecture sessions with hands-on exercises, separated by a 1-hour break. Live attendance is recommended for the best experience. If you can’t join in real time, recordings will be available within 24 hours and accessible for four weeks after the seminar.
Closed captioning is available for all live and recorded sessions. Captions can be translated into a variety of languages, including Spanish, Korean, and Italian.
ECTS Equivalent Points: 1
More Details About the Course Content
By the end of the course, you will be able to:
- Design audit and correspondence experiments with a clear understanding of identification, signal minimalism, and information equivalence.
- Distinguish which parts of an audit-study workflow can be responsibly delegated to AI agents and which require direct human control.
- Use AI agents to help construct tightly controlled signals, including names, emails, résumés, cover letters, and application materials.
- Recognize how agent-produced text and behavior can introduce unintended class, racial, gendered, cultural, or stylistic cues.
- Develop validation and pre-testing protocols tailored to AI-assisted correspondence.
- Evaluate whether agent-generated variation functions as a meaningful treatment signal or as uncontrolled noise.
- Use AI agents to support the fielding of audit studies, including structured treatment delivery, message sequencing, follow-up management, and instrument administration, without compromising design integrity.
- Distinguish between bias embedded in the agentic system and bias revealed through field responses.
- Clearly document the use of AI agents for replication, peer review, and ethical oversight.
Computing
To participate fully, you’ll need to download and set up Warp, a terminal-driven agent interface. No programming setup is required, though you’re welcome to bring a current project so you can apply the workflows to your own work.
Who Should Register?
This seminar is designed for:
- Researchers who use or plan to use audit and correspondence experiments in studies of discrimination, gatekeeping, or social evaluation.
- Faculty members, graduate students, and applied researchers interested in integrating AI agents into experimental design without sacrificing causal clarity.
- Scholars who want practical guidance on using AI agents to support stimulus generation, treatment delivery, and study implementation while maintaining control over equivalence and interpretation.
- Research teams seeking transparent, defensible workflows for AI-assisted audit studies.
This course is not for those seeking a general introduction to AI, chatbot-based experimentation, or AI-assisted statistical analysis. It is not intended as a course on fully autonomous field experiments. The emphasis throughout is on careful design, bounded delegation, and credible inference.
Outline
Audit-study logic and why AI agents create both promise and risk
- What audit and correspondence experiments identify
- Signal minimalism and the logic of controlled difference
- Information equivalence and why it is easy to break
- How AI agents can help and how they can distort
- The difference between useful augmentation and uncontrolled variation
Using AI agents to generate correspondence without losing design control
- Constrained stimulus generation with AI agents
- Building templates for names, résumés, emails, and application materials
- Iterative refinement with human oversight
- Detecting unintended cues in style, fluency, tone, and content
- Designing pre-tests and diagnostics for agent-assisted materials
Using AI agents to field studies: treatment and instrument delivery
- When AI agents can be used to deliver treatments or administer instruments in correspondence designs
- Standardization versus responsiveness in agent-mediated delivery
- Risks introduced by adaptive behavior at the point of contact
- Building bounded workflows for message delivery, follow-up, and interaction management
- Logging, auditing, and validating what was actually delivered in the field
Inference, ethics, and reviewer-proofing AI-enhanced audit studies
- Conducting audit studies when stimuli or delivery mechanisms are partly agent-assisted
- Interpreting results when the agentic system may itself introduce bias
- Separating treatment effects from agent artifacts
- IRB review, ethical burden, and institutional concerns
- Writing methods sections that anticipate skepticism from reviewers and editors
Seminar Information
Tuesday, August 18 – Friday, August 21, 2026
Schedule: All sessions are held live via Zoom. All times are ET (New York time).
10:30am-12:30pm
1:30pm-3:00pm
Payment Information
The fee of $995 USD includes all course materials.
PayPal and all major credit cards are accepted.
Our Tax ID number is 26-4576270.
