Wanstor AI usage policy
Summary
Wanstor uses artificial intelligence tools internally to support the delivery of services. This includes the use of Microsoft 365 Copilot, which may process customer data in the course of normal service delivery. Some Copilot experiences use non-Microsoft language models, such as Anthropic's Claude, which may involve AI processing outside the UK or EU under Microsoft's contractual and security framework. This Addendum explains how and when AI tools are used, and how this fits within our existing security and data protection obligations.
Policy
1. Use of AI tools in service delivery
Wanstor uses AI tools internally to support the delivery of services. These tools are used to assist our teams with activities such as analysis, drafting, summarisation, automation, testing, and productivity tasks.
AI tools are used as a support mechanism for Wanstor personnel. Responsibility for decisions, outcomes, and advice remains with Wanstor and our staff.
2. Primary use of Microsoft 365 Copilot
Our primary AI tooling for work involving customer environments or customer data is Microsoft 365 Copilot, where it is available.
Microsoft 365 Copilot operates within the Microsoft 365 service boundary and is subject to Microsoft’s enterprise security, privacy, and compliance controls. Microsoft states that prompts and responses, and organisational data accessed through Copilot, are not used to train underlying foundation language models.
Where Wanstor uses Microsoft 365 Copilot in the course of delivering services, customer data may be processed within Wanstor’s Microsoft 365 tenant in line with Microsoft’s product terms and data protection commitments.
3. Use of Anthropic and other non-Microsoft models within Copilot
Microsoft 365 Copilot supports multiple large language models behind the Copilot experience, including Anthropic models such as Claude, as well as other models that Microsoft may introduce in future.
Microsoft states that when Anthropic-backed models are used within certain Copilot experiences, including Copilot in Word, Excel, and PowerPoint, some AI processing occurs outside Microsoft's UK and EU data boundary. In these scenarios:
- Anthropic operates as a Microsoft sub-processor
- processing occurs within Microsoft’s Copilot orchestration and enterprise controls
- prompts and responses are not used to train Anthropic or other foundation models
Wanstor’s use of Copilot includes the use of these models where Microsoft selects or permits them for specific Copilot capabilities. As Microsoft expands or changes the models available within Copilot, Wanstor may use those capabilities as part of service delivery.
This means that, in limited circumstances, customer data handled via Copilot may be processed, but not stored, outside the UK or EU as part of Microsoft-managed AI execution.
4. Limited use of other AI tools
In addition to Microsoft 365 Copilot, Wanstor may use other AI tools for internal activities such as research, development, drafting, diagnostics, experimentation, or skills development.
Some of these tools may not operate under UK GDPR-aligned contractual terms or UK and EU data-location commitments.
Wanstor trains and instructs its staff to avoid submitting customer personal data, sensitive data, or confidential customer content into AI tools that are not covered by appropriate safeguards. Where AI assistance is required, staff are instructed to prefer Microsoft 365 Copilot or to work with redacted, anonymised, or synthetic data.
5. Customer responsibilities and shared context
AI tools operate within the permissions, access controls, and data governance that already exist in the underlying systems. For Microsoft 365 Copilot, this means Copilot can only access data that the user already has permission to access.
Customers remain responsible for configuring appropriate access controls, data classification, and governance within their own environments.
6. Information security queries
This Addendum is intended to provide transparency about how AI tools are used by Wanstor. It does not create customer approval rights or service restrictions.
Any information security, supplier assurance, or audit-related queries relating to AI use should be directed to:
infosec@wanstor.com
7. Relationship to the MSA
This Addendum supplements the underlying Master Services Agreement by explaining how AI tools are used in practice. It does not replace confidentiality, data protection, or security obligations set out in the MSA.