With the introduction of AI features into Falcony, we maintain the same robust security standards and data protection measures that have always been in place. This article addresses key data privacy, security, and compliance questions for organizations evaluating or implementing Falcony's AI capabilities.
Core security principles
Infrastructure security: We use the same AWS infrastructure as before, ensuring no additional data exposure is introduced with AI features. All data continues to reside in the EU, maintaining our existing data residency standards.
Data processor relationships: The list of third-party data processors remains unchanged with the introduction of AI features. AI summarisation is added as a new processing purpose, carried out directly with AWS Bedrock services under our established AWS partnership, which means no new external entities access your data (an illustrative sketch of such a call follows at the end of this section).
No training on customer data: Customer data is never used for training AI models. Your organisational data remains proprietary and confidential. Our service provider (AWS) does not store any of the data used or created during AI summary generation.
No data sharing: Customer data is never shared with third-party providers; see the AWS Bedrock security FAQ for more details.
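The sketch below is purely illustrative and is not Falcony's implementation. It shows how an AI summary request can be made directly against Amazon Bedrock from within an existing AWS account and EU region, so the data never leaves the established AWS infrastructure. The region, model ID, prompt wording, and function name are assumptions made for the example.

```python
# Illustrative sketch only (not Falcony's actual code): calling Amazon Bedrock
# directly from an existing AWS account, in an EU region, to summarise text.
# The region, model ID, and prompt wording below are placeholder assumptions.
import boto3

# Keep the call inside EU infrastructure by using an EU Bedrock region.
bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

def summarise_observation(observation_text: str) -> str:
    """Ask a Bedrock-hosted model for a short summary of one observation."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        messages=[{
            "role": "user",
            "content": [{"text": f"Summarise this observation:\n\n{observation_text}"}],
        }],
        inferenceConfig={"maxTokens": 300},
    )
    # The generated summary is returned to the caller; per the AWS Bedrock
    # security FAQ, prompts and outputs are not stored or used for training.
    return response["output"]["message"]["content"][0]["text"]
```

Because the request goes to Bedrock inside the same AWS environment, producing a summary involves no additional third-party processor.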
EU AI Act and regulatory compliance
Under the EU AI Act, Falcony operates as an "AI Deployer" of a limited-risk system, while Amazon serves as the "AI Provider." What this means for your organisation:
Falcony maintains full responsibility for how AI is used with your data
Falcony marks all AI-generated content as such
Amazon provides the underlying AI technology infrastructure
This classification ensures appropriate regulatory compliance for limited-risk AI implementations
Data processing details
Current AI data processing
Observations and their related details. For more information, see the corresponding help article: AI summary for observations.
Planned future data processing
Audits: Audit content and findings
Attachments: Document and image analysis capabilities
Granular control
Data processing is controlled by configurable feature flags at the organisational level. This provides granular control over AI feature usage (see the illustrative sketch after this list):
Enable AI summaries for observations while keeping audits excluded
Customise AI feature deployment based on your organisation's requirements
Maintain full administrative control over which data types are processed
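As a hypothetical illustration of such organisation-level control, the sketch below models feature flags that gate which data types AI may process. The flag names and structure are assumptions for the example, not Falcony's actual configuration schema.

```python
# Hypothetical sketch of organisation-level AI feature flags; the flag names and
# structure are illustrative assumptions, not Falcony's configuration schema.
from dataclasses import dataclass

@dataclass
class OrganisationAIFlags:
    """Per-organisation switches for which data types AI may process."""
    ai_summaries_for_observations: bool = False
    ai_summaries_for_audits: bool = False     # planned capability, off by default
    ai_attachment_analysis: bool = False      # planned capability, off by default

def may_process(flags: OrganisationAIFlags, data_type: str) -> bool:
    """Gate every AI call on the organisation's flags."""
    return {
        "observation": flags.ai_summaries_for_observations,
        "audit": flags.ai_summaries_for_audits,
        "attachment": flags.ai_attachment_analysis,
    }.get(data_type, False)

# Example: enable AI summaries for observations while keeping audits excluded.
flags = OrganisationAIFlags(ai_summaries_for_observations=True)
assert may_process(flags, "observation") and not may_process(flags, "audit")
```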
