Ethical AI practice for the helping professions.
Aluma provides the frameworks, governance, training, and tools that helping professionals actually need.
Founded by a licensed social worker · Research under peer review at Springer Nature
Choose Your Path
Aluma provides training, governance architecture, and ethical AI frameworks. Select the area that fits your role or organization.
Why Aluma Exists
AI is already in the helping professions — in documentation, client interactions, decision support, and daily workflows. But the professionals being asked to use these systems don't have tools designed for their context, training that builds ethical judgment instead of just compliance, frameworks that explain how ethics should operate inside technology, or governance structures that protect the people they serve.
Aluma exists to close that gap. Not with a single product or a policy document, but with an integrated ecosystem where ethical frameworks shape the tools, the tools reinforce the training, the training builds professional judgment, and governance holds it all accountable.

Founded by Ashleigh Gardner-Cormier, LMSW
Social Worker, Writer, Ethics-Focused Practitioner
With nearly two decades of experience across mental health, healthcare, legal, and hospice settings, Ashleigh brings real-world clinical and systems experience to every part of Aluma's work.
Aluma emerged from both professional practice and lived experience — shaped by a commitment to ethical sufficiency, boundary protection, and human-centered systems. Research is currently under peer review at a Springer Nature-affiliated journal.
From Research to Practice
Everything Aluma builds follows a single thread — from original research through to the tools and governance structures practitioners use every day.
Original research establishing Aluma's foundational thinking — underpinning the tools, learning environments, and governance models that follow.
Models like ARP, AIRP, and Micro-ARP that translate research into usable ethical structures guiding judgment, boundaries, and reflective decision-making.
Applications like Ask Aluma and Aluma Scribe that embody the frameworks in everyday use — supporting reflection without replacing human authority.
An applied ethical learning environment designed to deepen reasoning, helping practitioners slow down, reflect, and strengthen judgment in real-world contexts.
Living references built collaboratively with your team — defining where AI assistance ends and human responsibility begins, with clear decision authority and escalation paths.