AI Act: the 5 key obligations for Italian SaaS companies
Timeline, penalties and operational steps. What to do if you use OpenAI APIs or proprietary models — and why risk classification is the decision you can't postpone.
Read the analysis →
Specialized legal support for every phase of your digital product. Three disciplines, one principle: regulatory compliance as a functional requirement.
We build together the legal architecture of your data operations. Data Schema review, Data Retention policy and consent management integrated in the development cycle — before data complexity becomes compliance debt.
Learn more →
I guide you through the intersection of AI Act, Machinery Regulation and Cyber Resilience Act: from AI risk classification to CE marking, through FRIA and the protection of your models' intellectual property.
Learn more →
NIS2 compliance, incident management and personal liability of management. We build a security framework your Board understands and your team implements. Without slowing down delivery.
Learn more →
Regulatory compliance is not an obstacle: it's a product quality element, to be integrated from the design stage.
Compliance enters the development process as a functional requirement with verifiable acceptance criteria.
A regulatory framework built on your company's actual risk profile, not on generic checklists. From seed startup to scale-up with 50 sub-processors and ISO 27001 audit: the structure adapts without complete rewrites.
I work with GitHub, Linear, Notion, Slack. Regulatory requirements are Issues, not PDF reports. Compliance becomes something the team monitors and improves, not something managed only by the legal department.
Being compliant with European standards means being ready for any client, in any market.
Timeline, penalties and operational steps. What to do if you use OpenAI APIs or proprietary models — and why risk classification is the decision you can't postpone.
Read the analysis →
The new personal liability of administrators for cyber risk management. How to build a governance framework that protects the company — and management.
Read the analysis →
Data Privacy Framework and Transfer Impact Assessment. How to legally manage AWS, Google Cloud and Stripe without blocking operations.
Read the analysis →
GDPR, Data Act, Data Governance Act, ePrivacy: your product operates at the intersection of all four. They are not separate compliance boxes to tick on a checklist — they are a single regulatory system governing every European data flow, personal and non-personal. Knowing one without the others is the fastest way to be compliant on paper and exposed in reality.
Your product produces data. It collects, processes, transfers, shares it with sub-processors, analyses it to improve features, sends it to AWS, Stripe, Segment, OpenAI. Every step is regulated, and the perimeter has expanded radically in recent years.
GDPR is the foundation. But from 2023 onwards, the European legislator has built an ecosystem of new regulations around it, extending governance to non-personal data, industrial data, and electronic communications. Ignoring the Data Act because "we don't handle personal IoT data" or the DGA because "we're not a data marketplace" is a mistake you pay for during enterprise due diligence or the first contract with a banking client.
Our approach treats data governance as an architectural layer of the product: regulatory requirements enter the data model, integration pipeline and supplier contracts — not as constraints added after the fact, but as functional specifications from the first user story.
The foundation of the ecosystem. Governs every processing of personal data: collection, storage, processing, transfer. Data subject rights (access, erasure, portability, objection), controller accountability with processing records, legal basis for each processing, notification obligations in case of data breach within 72h.
Data protection must be built into the system from the design stage, and default settings must be the most privacy-protective: these are not recommendations but obligations with a direct impact on product architecture. In practice, the retention policy is decided when the DB schema is designed, not when the Italian DPA comes knocking.
Watch the pseudonymisation boundary: pseudonymised data in your analytics or ML system is not automatically outside the GDPR. If your system can link it back to a specific individual — even indirectly — it remains personal data. Before excluding it from compliance scope, a specific technical assessment is required.
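A minimal sketch of why this boundary matters, assuming a common analytics pattern where user IDs are replaced by keyed hashes (the key and scheme here are illustrative, not a recommendation): whoever holds the key can re-link the tokens to individuals, so the tokens remain personal data.

```python
import hmac
import hashlib

SECRET_KEY = b"rotate-me"  # hypothetical key held by the controller


def pseudonymise(user_id: str) -> str:
    """Derive a stable analytics token from a user ID (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()


def can_relink(token: str, known_user_ids: list[str]) -> bool:
    """Anyone holding SECRET_KEY can re-link tokens to individuals:
    the token is pseudonymised, not anonymised, and therefore stays
    within the scope of the GDPR."""
    return any(pseudonymise(uid) == token for uid in known_user_ids)


token = pseudonymise("user-42")
print(can_relink(token, ["user-7", "user-42"]))  # True: re-identification possible
```

The technical assessment mentioned above is essentially this question at system scale: does any component, key holder, or join path allow the link back to an individual?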
For SaaS startups: GDPR is not just a privacy policy and cookie banner — it's the consent architecture, the retention policy in the DB schema, the DPAs with every sub-processor, the Data Breach Response Plan tested before you need it.
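As an illustration of "retention policy in the DB schema" (table, column names, and the 24-month period are hypothetical examples, not advice for your specific data): the retention horizon is written at insert time, and an erasure path exists from the first migration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hypothetical schema: retention is a column, not an afterthought.
    CREATE TABLE support_tickets (
        id INTEGER PRIMARY KEY,
        user_email TEXT NOT NULL,
        body TEXT NOT NULL,
        created_at TEXT NOT NULL,     -- ISO 8601 date
        retain_until TEXT NOT NULL    -- set at INSERT time from the policy
    );
""")

# Retention decided at design time: e.g. 24 months for support data.
conn.execute(
    "INSERT INTO support_tickets (user_email, body, created_at, retain_until) "
    "VALUES (?, ?, date('now'), date('now', '+24 months'))",
    ("user@example.com", "Help with an invoice"),
)

# A scheduled job purges expired rows: the erasure path exists from day one.
purged = conn.execute(
    "DELETE FROM support_tickets WHERE retain_until < date('now')"
).rowcount
print(purged)  # 0: nothing has expired yet
```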
The Data Act changes the game for connected hardware manufacturers: your customers now have the right to take data generated by their device's use and give it to a direct competitor. This is not a theoretical scenario — it's an enforceable right from September 2025. How do you manage it contractually? How does it impact the architecture of your data value-added service?
For those processing or commercialising third-party IoT data: the new access and sharing rules change your contractual position relative to the hardware manufacturers. Verify before the next renegotiation.
For cloud providers: the Data Act introduces portability and switching obligations. For enterprise clients using your SaaS on a single cloud: the lock-in clauses you have today may not hold up against the new portability standards.
For SaaS in the healthcare sector: the European Health Data Space introduces specific secondary portability and research access obligations. If you handle health data, the perimeter has already expanded.
If your product aggregates datasets from multiple sources, facilitates data sharing between companies, or acts as a data marketplace, the DGA classifies you as a "data intermediation service provider" with registration, activity separation and neutrality requirements. This is not an intuitive category — many startups that fall within it don't know it yet.
The DGA and the Data Act don't overlap: the DGA regulates those who facilitate sharing (intermediaries), the Data Act regulates rights over specific data categories (those generated by connected products). A company building a B2B service on IoT data must understand both, because non-compliance risks come from different directions.
The framework the Italian DPA applies in its inspections on cookies and behavioural tracking. The proposed ePrivacy Regulation has been stalled since 2017, but the DPA isn't waiting: it has already issued significant fines for cookie walls, dark patterns and non-compliant CMP configurations. Don't wait for the final regulation to fix your banner.
Data governance becomes a functional requirement of the software. I work directly with your technical team — not downstream of their work, but from the first user story. Regulatory requirements enter the DB schema before the first migration, the DPAs before the sub-processor integration, the CMP before go-live.
The result is a product that doesn't accumulate compliance debt: every new feature is born already assessed for its regulatory impact, every new supplier is qualified before integration, every extra-EU transfer has its legal basis documented.
Formal appointment as external Data Protection Officer under art. 37 GDPR. Official point of contact with the Data Protection Authority, ongoing compliance monitoring, management of data subject requests (DSAR: access, erasure, portability, objection), team training, compliance metrics reporting. Scalable model: retainer calibrated to your company's stage, expandable without hiring constraints. Not a generalist DPO — a DPO with direct expertise in SaaS architectures, API economy and cloud providers.
Technical-legal analysis of the product architecture against GDPR and Data Act requirements: mapping all data flows (from user input to S3 bucket, through every microservice and third-party API call), identifying legal bases for each processing, analysing retention policies implemented in the DB schema, verifying data minimisation. Includes verifying the personal/pseudonymised data boundary for analytics and ML systems: data that doesn't directly identify the user is not automatically outside the GDPR.
Data Protection Impact Assessment for high-risk processing: large-scale behavioural profiling, systematic monitoring, processing of special categories (health, biometric, judicial data). The DPIA is built on the system's real architecture, with a threat model of risks to data subjects' rights, analysis of implemented technical mitigation measures (encryption, pseudonymisation, access control) and proportionality assessment. A document that holds up in the event of an inspection or challenge.
Structured incident response procedure: risk level assessment for data subjects' rights, notification to the DPA within 72h with complete incident documentation, communication to data subjects when risk is high, remediation plan and post-incident review. The service includes an emergency channel with response SLA in the first hours — because the quality of the notification and the outcome of any investigation are decided in the first 24h from incident identification.
Mapping and qualification of all sub-processors: AWS, GCP, Stripe, HubSpot, Intercom, Mixpanel, Segment, Sentry, OpenAI, Anthropic and every other processor acting on your behalf. Negotiation and drafting of Data Processing Agreements. Standard Contractual Clauses for extra-EU transfers. Transfer Impact Assessment for flows to the USA, UK and third countries. The processing record is built and maintained as a living document, updated with every new integration.
Consent strategy implementation compliant with the Italian DPA 2021 guidelines: Consent Management Platform configuration, privacy-safe default verification, cookie banner and cookie wall audit, third-party tag analysis (Google Tag Manager, Meta Pixel, LinkedIn Insight Tag, Hotjar). For every third-party script: legal basis, cookie category, consent implications. Consent collected must be documented, granular and revocable as easily as it was given.
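A sketch of what "documented, granular and revocable" can mean at the data-structure level (class and purpose names are illustrative): one event per purpose, an append-only history that documents every change, and a privacy-safe default when no record exists.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class ConsentEvent:
    user_id: str
    purpose: str          # e.g. "analytics", "marketing": one event per purpose
    granted: bool         # False records a revocation
    recorded_at: datetime


class ConsentLedger:
    """Append-only: the history itself documents the consent collected."""

    def __init__(self) -> None:
        self._events: list[ConsentEvent] = []

    def record(self, user_id: str, purpose: str, granted: bool) -> None:
        self._events.append(
            ConsentEvent(user_id, purpose, granted, datetime.now(timezone.utc))
        )

    def has_consent(self, user_id: str, purpose: str) -> bool:
        """Latest event per purpose wins: revoking is as easy as granting."""
        for e in reversed(self._events):
            if e.user_id == user_id and e.purpose == purpose:
                return e.granted
        return False  # privacy-safe default: no record means no consent


ledger = ConsentLedger()
ledger.record("u1", "analytics", True)
ledger.record("u1", "marketing", True)
ledger.record("u1", "marketing", False)  # granular revocation of one purpose
print(ledger.has_consent("u1", "analytics"), ledger.has_consent("u1", "marketing"))
# True False
```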
Where do you stand on the European Data Economy?
A 30-minute call to map your data flows against GDPR, Data Act and DGA: which regulations apply to your product, what the real exposures are, and what concrete steps to take in the next 90 days. No commitment.
If you build robots, autonomous systems or machines with integrated AI, you are operating at the intersection of three overlapping European regulations: the AI Act, the Machinery Regulation and — from 2027 — the Cyber Resilience Act. Knowing just one is not enough. And almost no one knows all three.
Until 2024, industrial machine manufacturers had a single reference framework: the Machinery Directive with its CE marking. Today that framework is no longer sufficient. If the machine integrates AI, a second compliance layer is required — the AI Act. If it has updatable software components, from December 2027 the Cyber Resilience Act will also apply.
The practical result is a multiple compliance regime: the machine must be safe as hardware (Machinery Regulation, CE marking), compliant as an AI system (risk classification, technical documentation, human oversight, FRIA) and (from 2027) cyber-resilient as a product with digital elements (security by design, vulnerability disclosure, patch support for the entire commercial lifetime).
Those who don't build the technical documentation in an integrated way from the first release end up doing a complete rewrite before the enterprise launch. The cost is not linear: it is much higher if addressed after the fact.
The new Product Liability Directive adds a further risk layer: a bug in the ML model that causes an incident is no longer just a technical issue, but a potential civil liability for the manufacturer. Legal due diligence on software and AI is not optional — it is part of the product.
Four risk categories. For industrial machines, the critical point is Annex III: AI systems used as safety components are classified as high-risk, with technical documentation obligations, EU database registration, robustness testing, mandatory human oversight and FRIA. Incorrect downward classification is the most penalised type of non-compliance.
If you integrate a general-purpose AI model (an LLM, for example) in your machine for voice interfaces or decision support: verify that the model provider meets the transparency obligations on training data that the AI Act imposes on these models. If it is not compliant, liability may fall on you as the deployer.
A machine with a high-risk AI component must simultaneously satisfy: the essential safety requirements of the Machinery Regulation, including those for machines with machine-learning functions (Annex I, section 1.1.9); and the AI Act requirements: Annex IV technical documentation, verifiable automatic logs, human oversight measures, post-market monitoring. The two conformity regimes share some documentary evidence but require distinct structures; building them separately duplicates work without adding value.
Replaces the Machinery Directive with an updated framework that includes machines with machine-learning capabilities. The new essential safety requirements for "self-evolving" machines require that safety behaviours remain predictable and verifiable even after training and model update phases. Every project starting today should account for this: it is far less costly than doing a complete revision in 18 months.
From December 2027, all products with digital elements (including robots and autonomous systems with software components) must meet cybersecurity-by-design requirements: security must be built into the product from the design stage, not added later. Concrete obligations: structured vulnerability management, security patch support for the entire commercial lifetime of the product, ENISA notification within 24h of actively exploited vulnerabilities. For those building robots with remotely updatable firmware: every software component falls within the CRA perimeter. Building the technical documentation today in an integrated way with the Machinery Regulation and the AI Act (instead of three separate processes) is the most efficient choice.
Key deadlines
| Date | Event |
|---|---|
| Feb 2025 | AI Act: absolute prohibitions in force (real-time biometric identification, subliminal manipulation, social scoring). |
| Aug 2026 | AI Act: full application for high-risk systems (Annex III). FRIA, technical documentation, EU registration. Penalties: up to €15M or 3% of turnover. |
| Jan 2027 | Machinery Regulation 2023/1230 replaces Machinery Directive 2006/42/EC. |
| Dec 2027 | Cyber Resilience Act: full application. Security by design, vulnerability disclosure, mandatory patch support. |
Analysis of the AI system against the four risk categories of the regulation, with focus on Annex III for safety components in machinery. Mapping of the declared intended purpose and the reasonably foreseeable misuse — because the AI Act assesses risk on actual use cases, not just declared ones. Verification of the interaction between AI Act classification and classification under the Machinery Regulation. Analysis of the applicable regime for any general-purpose models integrated into the system.
Output: classification memo with a defensible position before the market surveillance authority, map of applicable obligations, and compliance roadmap prioritised for the next 18 months.
Management of technical documentation for multi-regime systems: Machinery Regulation, AI Act, Cyber Resilience Act. Building integrated documentation that satisfies all frameworks without duplication — same documentary evidence, distinct structures where required. Defining the compliance path: self-declaration vs. involvement of a Notified Body for high-risk categories.
A critical issue in building technical documentation is the gap between the obligation to control and the actual power to control. The AI Act imposes human oversight requirements (Art. 14), but an AI system may be designed in a way that makes such oversight difficult or impossible to exercise in practice. A machine processing hundreds of signals per second in industrial environments cannot be "supervised" by a human operator in the traditional sense. The technical documentation must address this explicitly: define what type of supervision is technically exercisable on that specific system, under what operational conditions, and at what frequency. Failing to document this analysis risks not satisfying the regulatory requirement.
Mandatory fundamental rights impact assessment for high-risk AI systems. For industrial machinery: analysis of the rights of persons who interact with or are affected by the machine (operators, workers, third parties in the operational environment). Analysis of systematic biases in training data for safety-critical functions. Document designed to be updated with every significant change to the system.
Documentation strategy to demonstrate due diligence in the design and monitoring of the AI system. Contractual structure along the supply chain: indemnity clauses between the machine manufacturer, AI component supplier, and distributor. Analysis of insurance coverage against the new liability regime for products with software and AI components.
Two structural dynamics make civil liability in AI systems more complex than in traditional products. The first is the many hands problem: the development chain involves dozens of actors — no one has full control, no one has visibility over every component. In the event of an incident, this fragmentation creates a liability vacuum that the contractual structure must address explicitly. The second is the risk of the moral crumple zone: in a system where an operator is nominally responsible for oversight but in practice cannot exercise real control over the AI system's output, that operator absorbs the legal burden in case of an incident. The technical documentation must precisely define the effective perimeter of human control exercisable in the real operational context.
Analysis of training dataset composition: origin, licences, presence of copyright-protected or trade-secret data. Review of open-source licences for base models and usage restrictions for commercial and high-risk applications. Protection of proprietary model intellectual property: trade secret, IP assignment in contracts with employees and contractors. Compliance with training data transparency obligations for general-purpose models.
Analysis of the product against the Cyber Resilience Act requirements in preparation for the December 2027 deadline: attack surface mapping, security-by-design architecture review, vulnerability disclosure policy analysis, patch release procedure verification. For robot manufacturers: the CRA overlaps with the Machinery Regulation's post-market surveillance obligations and the AI Act's monitoring requirements — an integrated system is more efficient than three separate processes.
Design of the in-production monitoring system required by the AI Act: performance KPIs, alert thresholds for performance degradation, model update procedure that does not trigger a new conformity assessment. Incident reporting: serious malfunctions must be notified to the national market surveillance authority. Integration with the Machinery Regulation's post-market surveillance obligations and the CRA's vulnerability disclosure requirements.
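A simplified sketch of the kind of degradation alert such a monitoring system might implement (the metric, threshold, and window are illustrative values to be fixed in the technical documentation, not defaults from any regulation):

```python
from collections import deque


class DegradationMonitor:
    """Hypothetical post-market check: alert when a rolling performance
    KPI drops below the threshold agreed in the technical documentation."""

    def __init__(self, threshold: float, window: int = 100) -> None:
        self.threshold = threshold
        self.scores: deque[float] = deque(maxlen=window)

    def observe(self, score: float) -> bool:
        """Record one per-inference quality score; return True when the
        rolling mean crosses below the alert threshold."""
        self.scores.append(score)
        mean = sum(self.scores) / len(self.scores)
        return mean < self.threshold


monitor = DegradationMonitor(threshold=0.90, window=5)
alerts = [monitor.observe(s) for s in (0.95, 0.94, 0.93, 0.80, 0.78)]
print(alerts)  # the sustained drop trips the alert on the last observation
```

The design point is that the alert condition is explicit and logged, which is exactly what an incident-reporting obligation needs as evidence.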
Industrial robot and cobot manufacturers — Collaborative robots with adaptive functions, pick-and-place systems with computer vision, robotic arms with ML-based trajectory planning. The AI component is almost certainly a safety component under the AI Act: high risk, Annex III.
Autonomous system and AMR manufacturers — Autonomous Mobile Robots and AGVs with ML-based navigation for industrial or logistics environments. Movement autonomy in environments shared with people triggers specific requirements under both the Machinery Regulation and the AI Act.
Robotics and deep tech startups — Teams developing robotic platforms for agriculture, construction, healthcare, defence, or space. The multi-framework regulatory regime impacts system design from the start: retrofitting an architecture for compliance is far more costly than building it compliant from the first release.
AI system integrators in existing machinery — Those adding an AI layer (computer vision, anomaly detection, predictive maintenance) to already-certified machines. Integrating a high-risk AI component may require a new conformity assessment of the entire machine, and from 2027 a review of CRA documentation if the product has remotely updatable software components.
Is your robot or AI system correctly classified under all applicable frameworks?
A classification session to analyse your product against the AI Act, Machinery Regulation, and Cyber Resilience Act: risk category, applicable obligations, interaction between the three frameworks, and operational roadmap. 60 minutes, defensible position as output.
NIS2 is not just an IT problem. It is a legal obligation with direct personal liability for directors. Delegating to the technical team is not enough, and ignoring it exposes management to personal sanctions regardless of who handles operational security.
Personal liability of directors
Since NIS2 transposition in Italy (D.Lgs. 138/2024), management bodies approve security measures and monitor their implementation. Non-compliance can result in personal liability for directors — including temporary suspension from office for essential entities.
The scope has tripled. NIS1 covered 7 sectors. NIS2 covers 18, including for the first time managed service providers, cloud providers, software vendors, and ICT suppliers in general. If your main client is a bank, hospital, utility, or public administration, you may be subject to NIS2 supply chain obligations even without falling directly within the scope, because essential entities are required to manage the risk of their ICT suppliers.
Liability has become personal. Management bodies approve security measures and monitor their implementation. Non-compliance can result in personal liability for directors, including temporary suspension from office for essential entities. This is no longer just a technical matter: it is a Board responsibility.
Notifications are faster and more structured. Pre-alert to the ACN within 24h of incident identification, detailed notification within 72h, final report within 1 month. Each phase has specific content requirements. Facing an incident without documented and tested procedures means managing the technical crisis and drafting regulatory documents under pressure at the same time.
NIS2 takes a 360-degree view of security. Not just technical cybersecurity: also physical controls on systems (unauthorised access to servers) and resilience against power outages or environmental events. A framework that covers only software is incomplete — and will be seen as such by the ACN.
Are you a fintech, bank, or insurer?
Your reference framework is not NIS2 but DORA (in force since January 2025): specific ICT risk management requirements, operational resilience testing, formalised management of critical ICT suppliers. Where DORA and NIS2 overlap, DORA prevails. But if you provide ICT services to third parties, you may be subject to both, and managing them as two separate silos is more costly than building an integrated framework.
Do you develop software sold as a product?
From December 2027 the Cyber Resilience Act imposes cybersecurity-by-design requirements: security built into the product from design, structured vulnerability management, patch support throughout the commercial life. For startups selling software as a licence or embedded product: the CRA introduces a compliance regime similar to CE marking.
Within 24h (pre-alert): initial notification to the ACN with minimum information about the incident, estimated impact, and an indication of whether a deliberate attack is suspected.
Within 72h (detailed notification): initial cause analysis, measures taken, indicators of compromise. In parallel, notification to the Data Protection Authority if personal data is involved.
Within 1 month (final report): full root-cause analysis, remediation measures, quantified actual impact, lessons learned.
Verification of applicability under D.Lgs. 138/2024: analysis of the operational sector, size thresholds (50 employees or €10M turnover) and entity type (essential or important). Mapping of applicable technical and organisational obligations: technical, physical, and organisational security measures, ICT supply chain security, incident management, business continuity, encryption, multi-factor authentication. The analysis evaluates compliance against recognised international standards (ISO/IEC 27001, ISO/IEC 27005) — the most defensible approach in the event of an ACN inspection.
Output: documented applicability assessment, obligations map with operational priorities, roadmap with concrete milestones.
Incident response plan compliant with NIS2 obligations: operational definition of "significant incident", detection and internal escalation procedures, ACN notification chain (24h pre-alert → 72h detailed notification → 1 month final report), documentation templates for each phase. Includes proactive monitoring of known vulnerabilities in systems in use, because an incident often originates from an already-known unpatched vulnerability. Coordination with GDPR procedures for incidents that also involve a data breach: the two notifications have similar deadlines but different content and must be managed in parallel. The plan is tested with simulated scenarios — an untested plan does not work when needed.
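The notification chain can be treated as hard deadlines computed from the moment of identification. A sketch (approximating "1 month" as 30 days; check the exact computation rule for your case):

```python
from datetime import datetime, timedelta, timezone


def nis2_notification_deadlines(identified_at: datetime) -> dict[str, datetime]:
    """Deadlines in the NIS2 chain, counted from incident identification:
    24h pre-alert, 72h detailed notification, 1-month final report
    (approximated here as 30 days)."""
    return {
        "pre_alert": identified_at + timedelta(hours=24),
        "detailed_notification": identified_at + timedelta(hours=72),
        "final_report": identified_at + timedelta(days=30),
    }


t0 = datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc)
for phase, deadline in nis2_notification_deadlines(t0).items():
    print(phase, deadline.isoformat())
```

An incident response plan that computes and displays these deadlines at the moment the incident is opened removes one decision from the most chaotic hours of the crisis.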
Assessment of critical ICT suppliers and drafting of security contractual clauses: minimum technical obligations (patching policy, incident notification, audit rights), security SLAs with measurable metrics, audit and penetration testing rights, incident notification obligations towards the client. The supply chain is the most common attack vector, and NIS2 makes it an explicit contractual responsibility.
For banks, fintechs, insurers, and investment managers: compliance profile analysis against DORA. Review of ICT risk governance framework. Assessment of critical ICT suppliers according to European supervisory authority guidelines. Structuring DORA incident reporting procedures and coordination with NIS2 obligations. For fintechs: verification of direct applicability and impact on contracts with DORA-subject clients.
Training session for CTOs, CEOs, and board members: overview of obligations and management personal liability, full scope of required security (technical, physical, organisational), what the Board must approve and how to document it, practical incident scenarios with simulation of the decision-making process in the first 24h. Documented and certified training — proof of compliance with the training obligation in the event of an inspection.
Legal risk profile analysis: mapping of potential sanctions against current non-compliance level, assessment of management personal liability exposure, analysis of enterprise client contracts containing security compliance clauses. For startups providing ICT services to NIS2 entities: assessment of the impact of supply chain security obligations. Includes a preliminary CRA scope analysis for those developing and selling software as a product.
Legal support in managing mandatory notifications to the ACN during and after an incident: drafting the pre-alert within the first 24h, managing the detailed notification within 72h, preparing the final report within 1 month. Coordinated management with the Data Protection Authority when the incident also involves personal data. Support in any subsequent investigation phases.
Sector — Annex I (essential entities): energy, transport, banking, financial market infrastructure, healthcare, drinking water, wastewater, digital infrastructure, B2B ICT service management, central government, space. Annex II (important entities): postal and courier services, waste management, chemical production, food production, medical/electronic/machinery/vehicle manufacturing, digital providers (marketplaces, search engines, social networks).
Size — 50+ employees or annual turnover > €10M. The conditions are alternative: one is sufficient.
Supply chain — Even if you are not directly within scope, your NIS2-subject clients are required to manage the risk of critical ICT suppliers. If you provide software, infrastructure, or managed services to banks, hospitals, utilities, or public administrations, expect NIS2-aligned contractual clauses in upcoming contracts.
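Setting the sector test aside, the size threshold reduces to a simple alternative condition. A sketch (a first filter, not a legal assessment: sector, entity type, and supply-chain exposure still need to be checked):

```python
def meets_nis2_size_threshold(employees: int, turnover_eur_m: float) -> bool:
    """The size conditions are alternative: 50+ employees OR annual
    turnover above EUR 10M is enough to meet the threshold."""
    return employees >= 50 or turnover_eur_m > 10.0


print(meets_nis2_size_threshold(employees=12, turnover_eur_m=14.0))  # True
print(meets_nis2_size_threshold(employees=30, turnover_eur_m=8.0))   # False
```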
| Entity type | Maximum penalty | Management liability |
|---|---|---|
| Essential Entities | €10M or 2% global turnover | Temporary suspension from office |
| Important Entities | €7M or 1.4% global turnover | Public notification of the incident |
| DORA (financial entities) | Up to 1% of average daily turnover per day of violation | Personal liability of senior management |
Does NIS2 apply to your company?
One hour of assessment to verify applicability, map the relevant obligations, and define where to start in concrete terms. Output: documented applicability assessment and operational priorities.
We believe regulatory compliance is not a brake on innovation — it is its most solid foundation. Law is the code of the digital ecosystem.
“It's not about chasing compliance but anticipating it. Designing processes by measuring, from the very start, the exact risk exposure is the first step toward the goal.”
— Avv. Matteo Pompilio
Data protection decisions are architecture decisions. They belong to the system design phase, not the final review phase. When retention policy enters the DB schema from the first migration, when consent is designed into the UX flow before go-live, when DPAs are negotiated before integrating a sub-processor — compliance stops being an added layer and becomes part of the structure. A product built this way is more robust, more auditable, and easier to evolve over time.
Digital law cannot be applied at a distance from the technology it regulates. A contractual clause written without knowing the architecture of the system it describes is structurally weak. A risk analysis conducted without reading the code (or at least the technical documentation) is unreliable. The studio's method requires legal and technical work to run in parallel, with the same tools and in the same conversation — not downstream of decisions already made, but in the moment they are made.
Regulatory compliance has no finish line. Products evolve, architectures change, regulations update, markets expand. A framework built as a periodic task (to check once a year, or before an audit) cannot withstand this dynamic. The studio's method treats compliance as a continuous process integrated into the organisation's lifecycle: every new feature is evaluated against its regulatory impact, every new supplier is qualified before integration, every incident is managed with already-tested procedures. Compliance that accumulates over time is the kind that does not break at the worst moment.
Regulatory requirements enter the process as acceptance criteria, not as final reviews. Every sprint has its compliance review component, proportional to the risk of the features developed.
I don't deliver documents — I work with the team. I participate in technical conversations, understand the system architecture, and know the stack before writing a single line of legal documentation.
The regulatory framework grows with the company. We build the right structure for the current phase, with the flexibility to adapt it for the next round, the first enterprise client, the expansion into a new geographic market.
No inaccessible legal language. Risks are communicated clearly, with quantified business impact. Priorities are shared, deadlines are real, work is measurable.
Let's build something compliant — and lasting.
The first conversation is free and without obligation. We analyse together the regulatory surface of your product and define concrete steps for the coming weeks.
Let's start →
Last updated: 31 January 2025
Your privacy matters. This policy clearly explains how we handle your data when you visit this site. Click on sections to expand them.
I, Avv. Matteo Pompilio, am the data controller (technically the "Titolare del Trattamento" under Italian law) of the data collected through this site.
📍 Office: Via Superga 195 - 76125, Trani (BT)
The site automatically collects some technical information:
Version 2.0 - Last updated: 31 January 2025
If you write to me via email or contact form, I collect what you voluntarily provide:
Why do I do this? To respond to your request for information or advice.
Legal basis: Pre-contractual measures (Art. 6.1.b GDPR)
How long do I retain them? 24 months from the request, or until you ask me to delete them.
Your data is NOT sold or shared for commercial purposes.
They may only be communicated to:
⚠️ Extra-EU Transfers
Some services (Netlify, Google) also have servers in the USA. Transfers take place in compliance with the EU-US Data Privacy Framework and EU-approved Standard Contractual Clauses.
The GDPR gives you full control over your data. You can:
🔍 Access
Ask me for a copy of all the data I hold about you
✏️ Rectify
Correct incorrect or incomplete data
🗑️ Erase
Have your data erased ("right to be forgotten")
⏸️ Restrict
Temporarily block the use of your data
📦 Port
Receive your data in a format readable by other platforms
🚫 Object
Say "no" to processing for legitimate reasons
How to exercise your rights?
Simply write to me at: matteopompilio@studiolegalepompilio.it
I will respond within one month (the maximum under GDPR, extendable only for complex requests).
💡 You can also lodge a complaint with the Data Protection Authority if you believe your rights have been violated.
This policy may be updated over time to reflect regulatory changes or changes in site practices. You will always find the latest version here, with the update date at the top.
Version history
v2.0 - 31 Jan 2025
Full GDPR policy. Interactive accordion, clear language, technical cookie detail.
v1.0 - 01 Jan 2025
Initial minimum notice.