White Papers
Original research and applied ethics publications establishing the conceptual foundation of Aluma's AI-integrated practice frameworks.
About This Research
These papers are not marketing materials or general thought leadership. They are structured, evolving contributions to the field of ethical AI, designed to be implemented, referenced, and built upon in real-world environments.
The work presented here reflects an ongoing body of research. Foundational editions document the original development of the core frameworks, while revised editions incorporate expanded concepts, applied insights, and advances in AI ethics, governance, and human-AI interaction.
Each publication is intended to support the preservation of human judgment in AI-mediated environments — at the level of individual decision-making, organizational practice, and system design.
Available Publications
AI-Integrated Ethical Practice™: Preserving Human Judgment in AI-Mediated Professional Environments
Audience: Social workers, clinicians, educators, compliance leaders, AI designers
Expands the original ARP, AIRP, and Micro-ARP frameworks by introducing Ethical Drift and a structured system for preserving reflective judgment in AI-integrated environments. Situates the framework within contemporary AI ethics, governance, and human-AI interaction research.
Builds on: ARP, AIRP, and Micro-ARP (Foundational Edition, 2025)
Ethical Sufficiency in AI-Supported Care: From Minimum Standards to the Preservation of Human Judgment in AI-Mediated Practice
Audience: Social workers, clinicians, educators, organizational leaders
Expands the original Ethical Sufficiency framework by introducing Ethical Drift and repositioning ethical sufficiency as an active condition that must be sustained through structured reflective engagement.
Builds on: Ethical Sufficiency in AI-Supported Care (Foundational Edition, 2025)
Human-in-the-Loop Is Not Enough: A Position Paper on AI Safeguards in Supportive Care
Audience: Clinicians, educators, agencies, AI designers, healthcare systems
A position paper explaining why human-in-the-loop is not a sufficient safeguard in AI-supported care systems and what must replace it.
Ethical Expansion Guard™: Preventing Constructive Assumption Error in AI-Assisted Clinical Documentation
Audience: Clinicians, social workers, EHR vendors, AI designers, compliance teams
Introduces a design-level safeguard that enforces evidence-bound output in AI-assisted documentation, preventing AI systems from generating clinically plausible content that exceeds what was explicitly documented.
Artificial Empathy: Limits, Risks, and Governance in Human-Centered AI Systems
Audience: Compliance teams, legal counsel, AI governance groups, executive leadership, clinicians, educators
Examines how simulated empathy in AI systems is interpreted and relied upon in vulnerable contexts, proposing a governance framework for assessment, disclosure, containment, and human oversight.