The Data Exists. The Problem Is Where It Lives.
When brands begin mapping their DPP readiness, they typically discover something both reassuring and frustrating: much of the data a Digital Product Passport requires already exists somewhere in their organization. Fibre composition is in the PLM system. Chemical compliance records are in a compliance management tool or shared drive. Supplier names are in the ERP. Certifications are in a folder managed by the sustainability team. Care instructions are on the care label itself.
The problem is not that the data does not exist. The problem is that it exists in fragments — scattered across disconnected systems, held in formats that cannot talk to each other, and managed by different teams with no shared data standard linking them together.
This is the legacy IT challenge for DPP compliance. And it is, for many brands, the single largest obstacle between their current state and a compliant, functioning Digital Product Passport.
How Brands Typically Hold Product Data Today
The typical mid-sized textile brand manages product data across a patchwork of systems that were each built to solve a specific problem — not to share data with each other:
Product Lifecycle Management (PLM)
PLM systems hold design and development data — material specifications, BOM (bill of materials), tech packs, supplier assignments. This is often the richest source of DPP-relevant product data in a brand's IT landscape. But PLM systems are typically optimized for the design and development phase. Once a product moves into production and commercial sale, PLM data is often locked in place and not updated — meaning the record in the PLM may not reflect changes made during sampling or production.
Enterprise Resource Planning (ERP)
ERP systems manage commercial and logistical data — purchase orders, inventory, supplier invoices, shipment records. They contain supplier names, order quantities, and production facility information that is directly relevant to DPP supply chain data requirements. But ERP data is structured for financial and operational purposes, not sustainability disclosure — it rarely holds the level of supply chain detail (facility names, production stage, country of processing) that DPP requires.
Compliance and Chemical Management Tools
REACH compliance documentation, SVHC substance declarations, and test reports may be managed in dedicated compliance tools, document management systems, or simply as PDFs in shared folders. This data is often maintained by a separate compliance or quality team with limited integration into the broader product data ecosystem.
Supplier Portals and Email
Much supply chain data — factory audit results, certifications, material declarations — is collected through supplier portals, spreadsheet templates sent by email, or manual questionnaires. The data arrives in inconsistent formats, is entered manually into internal systems (if it is entered at all), and is rarely linked at the individual product level.
Spreadsheets
A significant portion of sustainability and supply chain data in most brands lives in spreadsheets — maintained by individuals, structured inconsistently, and representing a single point of failure if the person who built the spreadsheet leaves the organization. Spreadsheet-based data is not queryable via API, is not structured to a machine-readable standard, and cannot be linked to individual product unit identifiers at the scale DPP requires.
The challenge is not a data shortage. It is a data architecture problem. The same information exists in multiple disconnected systems with no single source of truth, no consistent product identifier linking records across systems, and no API layer making the data accessible to external stakeholders.
What DPP Requires That Legacy Systems Cannot Provide
A compliant DPP does not just require that data exists — it requires that data be structured, linked, and accessible in specific ways that legacy systems were simply not designed to support:
A Single Unique Product Identifier Linking All Data
DPP requires every data record to be linked to a unique serialized product identifier — one that is consistent across all systems and serves as the key connecting physical product to digital record. Most legacy systems use their own internal identifiers (PLM style codes, ERP article numbers, compliance system references) that do not align with each other or with the GS1-based serialized GTIN the DPP requires. There is typically no master identifier that links a product's record across PLM, ERP, compliance, and sustainability systems.
Machine-Readable, Structured Data Formats
DPP data must be served in machine-readable formats — structured JSON or equivalent — that can be processed automatically by third-party systems including recycler platforms, customs tools, and consumer-facing DPP interfaces. Data held in PDFs, scanned documents, or unstructured text fields in legacy systems cannot be served in this format without significant transformation work.
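To make the contrast concrete, here is a minimal sketch of what a structured, machine-readable record might look like. The field names and identifier format are illustrative assumptions only, not the actual DPP schema, which will be fixed in delegated acts:

```python
import json

# Hypothetical example of a structured, machine-readable DPP record.
# Field names and values are illustrative only; the real DPP schema
# will be defined in ESPR delegated acts.
dpp_record = {
    "identifier": {
        "scheme": "GS1-SGTIN",
        "value": "urn:epc:id:sgtin:0614141.812345.6789",
    },
    "composition": [
        {"component": "body_fabric", "material": "organic cotton", "share_pct": 80},
        {"component": "lining", "material": "recycled polyester", "share_pct": 20},
    ],
    "country_of_assembly": "PT",
    "certifications": ["GOTS"],
}

# Serialize to JSON so third-party systems can parse it automatically,
# which a scanned PDF or free-text field cannot offer.
payload = json.dumps(dpp_record, indent=2)
print(payload)
```

A recycler platform or customs tool can parse this payload programmatically; the same information locked in a PDF would need manual extraction first.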
Real-Time API Accessibility
When a product's QR code is scanned, the resolver routes the request to the brand's data endpoint, which must return the correct data in real time. Legacy systems — particularly PLM and ERP — are typically not designed to expose product data via external-facing APIs. They may support data exports or batch reporting, but not the kind of real-time, per-product-unit query that DPP infrastructure requires.
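As a toy illustration of the per-unit lookup involved, the sketch below resolves a serialized identifier against an in-memory store. The identifiers and records are invented; a real endpoint would sit behind an HTTP API and a production database:

```python
# Toy sketch of the per-unit query a brand's data endpoint must answer
# in real time when a QR scan reaches it via the resolver. The store
# and identifiers here are invented for illustration.
RECORDS = {
    "urn:epc:id:sgtin:0614141.812345.6789": {
        "product": "Jacket A",
        "composition": "80% cotton / 20% rPET",
    },
}

def resolve(serialized_id: str) -> dict:
    """Return the DPP record for one product unit, or a not-found marker."""
    record = RECORDS.get(serialized_id)
    if record is None:
        return {"status": "not_found", "id": serialized_id}
    return {"status": "ok", "id": serialized_id, "data": record}
```

The point of the sketch is the access pattern: a single-unit, keyed, real-time lookup, rather than the batch export that PLM and ERP systems are built for.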
Controlled Multi-Stakeholder Access
DPP requires different data to be served to different stakeholders — consumers see one view, regulators see another, recyclers see a third. Legacy systems have no concept of this kind of differentiated external access. Their access models are designed for internal users, not for a multi-stakeholder ecosystem with defined, regulation-specified access rights.
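A simple way to picture differentiated access is a per-role field filter. The roles and field sets below are illustrative assumptions; the actual access rights per stakeholder will be specified by regulation:

```python
# Illustrative only: which record fields each stakeholder role may see.
# The real access rights will be fixed in the ESPR delegated acts.
VIEWS = {
    "consumer": {"composition", "care", "country_of_assembly"},
    "recycler": {"composition", "disassembly", "chemicals"},
    "regulator": {"composition", "care", "country_of_assembly",
                  "disassembly", "chemicals", "supplier"},
}

def view_for(record: dict, role: str) -> dict:
    """Serve only the fields the given stakeholder role is entitled to."""
    allowed = VIEWS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}
```

Legacy access models have no equivalent of this: they distinguish internal user permissions, not regulation-defined external audiences.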
Dynamic Data Updates
DPP data is not static — repair events, circularity updates, and certification changes must be reflected in the live record. Legacy systems that treat product data as fixed once production is complete have no mechanism for these lifecycle updates. And even if the system could accept updates, there is typically no defined process for how updates from third parties (repair operators, recyclers) would be received, validated, and applied.
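In code terms, accepting a lifecycle update means validating an incoming third-party event before appending it to the record's history. The event types and validation rules below are assumptions for illustration:

```python
# Sketch of accepting a third-party lifecycle update (e.g. a repair
# event) into a live record. Event types and required fields are
# assumptions; a real process would also authenticate the sender.
ALLOWED_EVENTS = {"repair", "recertification", "recycling_intake"}

def apply_update(record: dict, event: dict) -> dict:
    """Validate an incoming event and append it to the record's history."""
    if event.get("type") not in ALLOWED_EVENTS:
        raise ValueError(f"unknown event type: {event.get('type')!r}")
    if "actor" not in event or "date" not in event:
        raise ValueError("event must name the actor and date")
    record.setdefault("events", []).append(event)
    return record
```

Even this minimal shape highlights what legacy systems lack: a defined intake path, a validation step, and an append-only history for post-production changes.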
The Integration Challenge
The path from fragmented legacy systems to a compliant DPP data infrastructure is fundamentally an integration challenge. The goal is not necessarily to replace existing systems — it is to create a layer that connects them, normalizes their data, and exposes the result through a standards-compliant API.
This integration layer must:
- Pull product data from PLM, ERP, compliance tools, and supplier data sources
- Normalize that data against a consistent product identifier and a standardized data schema
- Apply data quality validation to catch gaps, inconsistencies, and errors before they propagate into published DPP records
- Serve the normalized, validated data via API to the resolver and downstream stakeholders
- Accept and process dynamic updates from circular economy operators
- Maintain version history and audit trails for all data changes
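The normalize-and-validate steps above can be sketched roughly as follows. The source records and field names are invented; real connectors would pull from PLM and ERP APIs rather than in-memory dicts:

```python
# Rough sketch of the integration layer's core: merge per-system records
# under one master identifier, then flag gaps before publication.
# All field names here are invented for illustration.

def normalize(plm: dict, erp: dict, master_id: str) -> dict:
    """Merge PLM and ERP fields into one record keyed on the master ID."""
    return {
        "id": master_id,
        "composition": plm.get("composition"),
        "supplier": erp.get("supplier_name"),
        "country_of_assembly": erp.get("origin_country"),
    }

def validate(record: dict) -> list[str]:
    """Flag missing fields before they propagate into a published record."""
    errors = []
    for field in ("composition", "supplier", "country_of_assembly"):
        if not record.get(field):
            errors.append(f"missing: {field}")
    return errors
```

The real layer adds connectors, schema mapping, update handling, and audit trails on top, but the pull-normalize-validate-serve shape stays the same.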
Building this layer in-house is a significant software engineering project. For most textile brands, integrating with a specialist DPP platform that handles normalization, validation, and API serving — while connecting to existing systems via standard integration protocols — is more practical than building from scratch.
The Data Quality Problem Hidden Inside Legacy Systems
Beyond structural incompatibility, legacy systems often harbor a data quality problem that only becomes visible when brands attempt to consolidate data for DPP purposes. Common issues include:
- Inconsistent supplier naming. The same factory may appear under different names, abbreviations, or spellings across PLM, ERP, and compliance records — making it impossible to automatically link records without manual reconciliation.
- Missing component-level composition data. PLM systems often record overall garment composition (the figure that appears on the care label) but not the component-level breakdown (body fabric, lining, trim, thread) that DPP requires.
- Outdated or incomplete country of origin data. Country of origin as recorded in the ERP may reflect the country of final assembly for customs purposes, not the country of origin of raw materials — which are two different fields in a DPP.
- Certification data not linked to products. Certification records often exist at the supplier or facility level, not linked to specific products or product batches — making it impossible to automatically determine which certifications apply to which DPP record.
- Compliance documents in non-machine-readable formats. Test reports, SVHC declarations, and REACH compliance documentation held as scanned PDFs cannot be parsed automatically and must be re-entered in structured format.
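The supplier naming issue in particular often yields to partial automation. The sketch below uses simple string normalization plus fuzzy matching against a canonical supplier list; the names and threshold are invented, and low-confidence matches would still need manual review:

```python
import difflib

# Illustrative reconciliation of supplier-name variants across systems.
# The canonical list and cutoff are assumptions; in practice, matches
# below a confidence threshold go to a human for review.
CANONICAL = ["Acme Textiles Ltd", "Blue River Garments"]

def canonicalize(name: str, cutoff: float = 0.6):
    """Map a raw supplier string to its canonical form, or None."""
    # Strip punctuation, collapse whitespace, normalize casing.
    cleaned = " ".join(name.replace(".", "").split()).title()
    matches = difflib.get_close_matches(cleaned, CANONICAL, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

This handles spelling and formatting variants; genuinely different names for the same facility still require the manual reconciliation described above.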
Identifying and resolving these data quality issues is time-consuming work that cannot be automated away. It requires systematic data auditing and, in many cases, going back to suppliers to request corrected or more detailed information. Starting this process early — well before 2028 enforcement — is the only way to have clean, consolidated data ready in time.
A Practical Path Forward
Brands facing the legacy IT challenge should approach it in stages rather than attempting a comprehensive system overhaul:
Step 1: Map Your Current Data Landscape
Before making any technology decisions, document where each category of DPP-required data currently lives — which system holds it, in what format, at what level of granularity, and who owns it. This map will reveal both the sources available to draw from and the gaps that need to be filled through supplier outreach or new data collection processes.
Step 2: Establish a Common Product Identifier
Identify or create a master product identifier that can be used consistently across all internal systems. This does not have to be the final serialized GTIN immediately — it can be an internal code that later maps to the GS1-compliant serialized identifier. The goal is to have a consistent key that allows data from PLM, ERP, compliance, and sustainability sources to be linked to the same product record.
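One way to picture such a master identifier is a simple cross-reference map that ties each system's code back to one key. All codes below are invented examples:

```python
# Sketch of a cross-system identifier map. The codes are invented
# examples of how one master key can tie PLM, ERP and compliance
# records together before the final GS1 serialized identifier exists.
ID_MAP = {
    "MASTER-000123": {
        "plm_style_code": "ST-4711",
        "erp_article_no": "ART-998877",
        "compliance_ref": "CMP-2024-015",
        "gtin": None,  # assigned later, once serialization is set up
    },
}

def lookup_master(system: str, code: str):
    """Find the master identifier given any system-specific code."""
    for master, codes in ID_MAP.items():
        if codes.get(system) == code:
            return master
    return None
```

Once the GS1-compliant serialized identifier is issued, it slots into the same map, and every system-specific code already resolves to it.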
Step 3: Prioritize Static, Objective Data First
Focus initial data consolidation efforts on the objective datapoints that are ready to collect now — manufacturing locations, material composition, supplier identifiers, certifications, care instructions. These do not depend on methodology decisions still in progress and can be structured and validated while the regulatory picture for methodology-dependent data continues to develop.
Step 4: Select a DPP Platform That Integrates with Your Existing Systems
When evaluating DPP service providers, prioritize integration capability over feature sets. A platform that can connect to your existing PLM and ERP via standard APIs — pulling data automatically rather than requiring manual re-entry — will reduce the ongoing operational burden significantly. Platforms that require all data to be entered manually into a new system simply add another silo to the landscape.
Step 5: Plan Data Governance Before Data Migration
Before moving data into a DPP platform, define ownership: who is responsible for each data category? Who validates updates? What happens when a supplier provides data that conflicts with what the brand's internal systems show? Data governance decisions made before migration prevent the quality problems that arise when multiple teams assume someone else is managing data accuracy.
Frequently Asked Questions
Do we need to replace our PLM or ERP system to comply with DPP?
Not necessarily. The goal is not to replace existing systems but to build a data integration and API layer that connects them and exposes a standards-compliant DPP data endpoint. Existing systems can continue to serve their primary functions while contributing data to the DPP infrastructure. That said, if existing systems are so outdated that they cannot support basic API integration, a broader modernization may be necessary — but this is a separate business decision from DPP compliance itself.
How long does it take to consolidate product data from multiple legacy systems?
It depends heavily on the number of systems involved, the volume of products in scope, and the quality of existing data. Brands that have started this process report that the initial data audit and gap identification take several months, and that resolving data quality issues — particularly supplier data gaps — takes significantly longer. A realistic estimate for a mid-sized brand moving from fragmented legacy data to a structured, validated DPP-ready dataset is 12 to 24 months of focused effort.
What is the minimum viable starting point for DPP data infrastructure?
The minimum viable starting point is a structured, queryable database of product records — one record per product type initially, with the architecture to support serialization at the unit level — linked to a consistent product identifier and exposing a basic API. This does not need to be a fully featured DPP platform from day one. It needs to be a clean, structured data foundation that a compliant DPP system can be built on top of as requirements are confirmed.
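As a rough illustration of that foundation, the snippet below stands up a queryable product-record table keyed on a consistent identifier, using SQLite for brevity. The schema and columns are illustrative assumptions, not a DPP standard:

```python
import sqlite3

# Illustrative minimum viable foundation: a structured, queryable table
# of product records keyed on a consistent identifier. Schema is an
# assumption for demonstration, not a DPP standard.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE product_record (
        master_id TEXT PRIMARY KEY,
        composition TEXT NOT NULL,
        country_of_assembly TEXT,
        certifications TEXT
    )
""")
conn.execute(
    "INSERT INTO product_record VALUES (?, ?, ?, ?)",
    ("MASTER-000123", "80% cotton / 20% rPET", "PT", "GOTS"),
)

# Unlike a spreadsheet, the record is queryable by identifier, which is
# the access pattern a DPP API layer needs underneath it.
row = conn.execute(
    "SELECT composition FROM product_record WHERE master_id = ?",
    ("MASTER-000123",),
).fetchone()
print(row[0])
```

Extending `master_id` to unit-level serialized identifiers later is a schema change, not a rebuild, which is why a clean keyed foundation matters more than early feature breadth.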
Ready to start your DPP journey?
Talk to our team about preparing your textile products for EU Digital Product Passport requirements.