How We Use AI
Artificial intelligence is changing how organisations work, and we think it is important to be open about how we use it — and how we don’t.
This page explains our current approach to AI, the principles that guide our decisions, and the boundaries we have set to protect the people who use our services.
Our principles for using AI
We have established clear principles that guide every decision we make about AI:
- Human oversight always. AI does not make decisions on its own at neurobetter. A person reviews, checks and approves anything that AI has contributed to before it reaches our users.
- Transparency first. If AI has been involved in creating something, we will say so. We do not present AI-generated content as if a human wrote it from scratch.
- Never for clinical purposes. AI does not assess, diagnose, triage or provide therapeutic input to any of our users. This is a firm boundary.
- Privacy protected. We do not share personal user data with AI systems. When we use AI tools, we do so in ways that keep user information separate and protected.
- Regularly reviewed. Our approach to AI is not fixed. We review it regularly as the technology, regulation and evidence base evolve.
Why transparency matters
Many organisations use AI without telling their users. We think that is wrong — especially in mental health, where trust is everything. You deserve to know how the information you read was created.
Where we currently use AI
We use AI as a tool to support our team’s work in specific, bounded ways:
Content drafting and research support. AI helps us gather research, draft initial content outlines, and identify relevant evidence. Every piece of content is then written, reviewed, edited and approved by a person before publication.
Technical development. AI assists our developers with code writing, debugging and documentation. All code is reviewed by our team before it is used in our services.
Administrative tasks. AI helps with routine tasks like summarising documents, drafting communications, and organising information — always with human review.
Where we do not use AI
There are clear areas where we do not and will not use AI:
- Clinical or therapeutic interactions. AI does not interact with users in any clinical, counselling or therapeutic capacity.
- Content published without human review. No content reaches our website or our users without being reviewed and approved by a person.
- Decision-making about users. AI does not make decisions about individual users, their access to services, or their care.
- Processing personal data. We do not feed personal user data into AI systems for analysis or profiling.
AI is a tool, not a therapist
neurobetter will never use AI to replace human support, clinical judgement, or therapeutic relationships. If you use our services, you can be confident that the support and information you receive have been created and reviewed by people.
The AI tools we use
We primarily use AI models from Anthropic (Claude) for our content and development work. We chose Anthropic because of their public commitment to AI safety, their responsible development practices, and the quality of their tools.
When we use these tools, we do so through professional accounts with appropriate data handling agreements in place. We do not use free consumer AI tools for any work that touches our services or content.
How we evaluate new AI uses
Before we introduce any new use of AI, we consider:
- what benefit it provides and whether that benefit is genuine
- what risks it introduces, including risks to privacy, accuracy and trust
- whether human oversight is maintained throughout
- whether it aligns with our principles and values
- what our users would expect and whether they would be comfortable with it
If we are not confident that a proposed use meets these criteria, we do not proceed.
Evolving regulation
The UK Government has published guidance on AI governance, and the EU AI Act is shaping how organisations across Europe approach AI. We follow these developments closely and adapt our practices accordingly.
Looking ahead
AI is a fast-moving field, and our approach will continue to evolve. We are committed to:
- updating this page whenever our AI use changes significantly
- engaging with emerging regulation and best practice
- listening to how our community feels about our use of AI
- maintaining our core principle that AI supports human work — it does not replace it
If you have questions about how we use AI, we welcome your feedback at team@neurobetter.org.
Your trust matters
We understand that AI can feel uncertain or concerning, especially in the context of mental health. We are committed to being honest about what we use, why, and what safeguards are in place. If anything on this page raises questions, please get in touch.