Traditionally, the transfer pricing industry focused on laws, regulations, principles, and international tax aspects. A typical transfer pricing expert thought primarily about which transfer pricing methodology to use, how to structure certain TP arrangements, or how to avoid unnecessary tax risks.
However, the landscape is changing rapidly with the introduction of new international tax initiatives like the OECD’s Pillar 2. Processes and procedures have become more important, and concepts of transfer pricing management and operational transfer pricing are now taking the lead in discussions. Additionally, the rise of Tax Administration 3.0 emphasizes the need for efficient data management in response to real-time reporting and data analytics requirements. Connecting all of this is one crucial element: data – the lifeblood of the transfer pricing of the future.
Transfer pricing and data: interrelations
If you think about transfer pricing, the first thing that comes to mind is Article 9 of the Model Tax Convention, the OECD Guidelines, and transfer pricing methods. More formally, this is a set of rules and principles that act as decision-making algorithms. And these algorithms are applied to the actual facts and circumstances of business reality.
Let’s take a simple example: we know that transactions involving limited (or low) risk distributors would usually be tested via the transactional net margin method. Of course, there are certain conditions that you should consider, and there are exceptions, but in general – this is a rule of thumb that works in most cases.
What’s more difficult in this case is not the algorithm but the facts and circumstances we need to identify and check to come to the right conclusion. At a minimum, you would need to:
- Identify entities involved in the transaction
- Get basic business and financial facts about the transaction (amounts, types of goods traded, payment terms, etc.)
- Find and structure the information about other fundamental factors – the five comparability factors, in OECD language, including functional analysis
These components represent the transfer pricing data – quantitative and qualitative facts collected together for transfer pricing analysis.
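To make this concrete, the minimal data set described above could be sketched as a simple record structure. This is a toy illustration only; the field names are ours, not an OECD or industry standard.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one record combining the quantitative and
# qualitative facts listed above. Field names are illustrative only.
@dataclass
class RelatedPartyTransaction:
    supplier: str                 # entity providing the goods or services
    recipient: str                # entity receiving them
    transaction_type: str         # e.g. "sale of goods"
    amount_eur: float             # transaction amount
    payment_terms: str            # e.g. "net 60 days"
    # The five OECD comparability factors, kept as free-text notes here
    comparability_factors: dict = field(default_factory=lambda: {
        "contractual_terms": "",
        "functional_analysis": "",   # functions, assets, risks
        "characteristics": "",       # of the goods or services
        "economic_circumstances": "",
        "business_strategies": "",
    })

tx = RelatedPartyTransaction(
    supplier="ParentCo", recipient="LocalDistributorCo",
    transaction_type="sale of goods", amount_eur=1_250_000.0,
    payment_terms="net 60 days",
)
```

Even this simplified sketch shows the split that runs through the rest of this article: the first few fields are quantitative facts typically scattered across ERP systems, while the comparability factors are qualitative facts that must be collected from people.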
Data gathering and data structuring – the most difficult part of transfer pricing work
While understanding and applying transfer pricing methodology is not easy, it is still a relatively straightforward exercise for most companies and their business models. At the same time, gathering and structuring transfer pricing data, even for simple transactions, can be a non-trivial task. Here are the reasons:
- Transfer pricing quantitative data is spread across various ERP systems, Excel files, and databases. Most companies did not have the privilege of deploying their systems and processes with transfer pricing requirements in mind. Digitalization in general accounting and finance started before transfer pricing became crucially important globally. As a result, transfer pricing professionals are required to gather data from disparate sources and systems not designed with transfer pricing needs in mind. In addition, substantial support is usually needed from non-transfer pricing specialists in the organisation, namely the accounting & finance and IT departments, which makes the data-gathering process cumbersome and slow. This challenge is further exacerbated when trying to meet new compliance requirements (like Pillar 2), which demand a higher level of accuracy and detail in data reporting.
- Transfer pricing quantitative data is not structured and mapped for transfer pricing purposes. For example, while it would be ideal to know which invoice relates to a specific transfer pricing methodology or even which intragroup agreement was used as a legal basis for a transaction, most ERP systems of multinational companies don’t have this information. Another complexity is translating tax and legal terms into accounting data language – for example, which accounting transactions are relevant for transfer pricing?
- Building custom integrations with ERPs and other systems for transfer pricing purposes rarely pays off. While some companies go through a process of building custom modules to extract data from ERPs and other systems, it is often the case that the game is not worth the candle. Most transfer pricing data is required annually (e.g., for compliance purposes), and building complex and expensive data integrations is too big an investment for such a use case. The potential for custom integrations for operational transfer pricing purposes is bigger, though it still requires a good business case (which is often lacking in reality).
- Transfer pricing qualitative data is spread around the organization and is often just in…people’s heads. While quantitative data is challenging for the reasons above, qualitative data is often the most problematic of all. First, it is spread around the organization and not properly documented. For example, the details of actual business models, such as risk allocation and market factors, are known by business managers. Still, this knowledge comes primarily from their expertise and daily work, which is unavoidably subjective. Another classic example is people thinking their function or contribution to the business model is the most important for the company (a form of the Spotlight Effect), as well as transfer pricing professionals asking leading questions in functional interviews and creating a form of the Observer-Expectancy Effect. All these biases lead to inconsistent, inaccurate, and subjective data points.
As a result, data gathering and data structuring almost always become the most time-consuming and problematic part of the transfer pricing project:
- Transfer pricing professionals must organize in-depth discussions with finance and accounting teams to understand data availability and issue detailed requests. Relying on standard, generic questions is not an option, as it frustrates data providers and/or produces inaccurate results.
- Provided quantitative data is usually not standard and takes the form of a random Excel file. Data providers often ignore structured requests and find a way to modify even protected sheets.
- At the same time, functional and industry interviews for qualitative data gathering require strong sponsorship from company management, buy-in from stakeholders, coordination of dozens of meetings, and many hours of preparation. Still, the results can be incomplete or inaccurate.
While we believe that the best way to help transfer pricing professionals deal with qualitative data is to provide them with a clear list of required data points, give examples, and share interview best practices (that’s what our applications do), our approach to quantitative data is different. That’s why we developed Aibidia Data Studio, which is an integral part of the Aibidia Platform.
Our approach to transfer pricing data: Aibidia Data Studio
Imagine you receive an Excel file with ten thousand rows and dozens of columns. The financial controller tells you: “That’s the data about related party transactions that you asked for!” Our experience shows that, at a minimum, you want to:
- Identify relevant lines and exclude entries that are not transactions in the transfer pricing sense
- Characterize transactions into types and introduce high-level tags and characteristics (e.g., sale of products – electronics), as well as map transactions to transfer pricing models and intercompany agreements (e.g., limited risk distribution – LRD Global Agreement)
- Aggregate and pair transactions
- Double-check with the financial controller that the mappings produce meaningful results
As an output, you would have a properly structured data set (hopefully dozens or hundreds of transactions, not thousands) ready for further transfer pricing analysis.
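The clean-up workflow above can be sketched in a few lines of code. This is a toy illustration with made-up entity names, account lines, and mappings – not Aibidia’s actual implementation.

```python
from collections import defaultdict

# Hypothetical raw ERP extract: each row is one ledger line.
raw_rows = [
    {"counterparty": "GroupCo DE", "type": "sale of products", "amount": 500.0},
    {"counterparty": "ThirdParty", "type": "sale of products", "amount": 900.0},
    {"counterparty": "GroupCo DE", "type": "sale of products", "amount": 700.0},
    {"counterparty": "GroupCo FR", "type": "loan interest",    "amount": 50.0},
]

# Step 1: keep only intragroup lines (drop third-party entries,
# i.e. lines that are not transactions in the transfer pricing sense).
group_entities = {"GroupCo DE", "GroupCo FR"}
relevant = [r for r in raw_rows if r["counterparty"] in group_entities]

# Step 2: characterize transactions and map them to TP models and
# intercompany agreements (illustrative mapping table).
tp_model_map = {
    "sale of products": ("limited risk distribution", "LRD Global Agreement"),
    "loan interest":    ("intragroup financing", "Group Loan Agreement"),
}

# Step 3: aggregate by counterparty, transaction type, and TP model.
aggregated = defaultdict(float)
for r in relevant:
    model, agreement = tp_model_map[r["type"]]
    aggregated[(r["counterparty"], r["type"], model, agreement)] += r["amount"]

for key, total in aggregated.items():
    print(key, total)
```

In this sketch, four raw ledger lines collapse into two mapped, aggregated transactions – the same reduction (from thousands of rows to dozens or hundreds of transactions) that makes the output usable for transfer pricing analysis.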
And this is precisely what Aibidia Data Studio does:
- The solution ingests raw transactional data, such as ERP extracts, and creates tables based on the imported data.
- Next, you can define specific transformation and mapping rules allowing Aibidia Data Studio to clean, characterize, map, and aggregate transactions.
- After this, you can define the output data set you would like to receive. It can be in a format that other Aibidia applications, such as TPDoc, can easily process or that you need for analytics.
- In addition, it supports decentralized workflows where you can grant users or user groups (e.g., financial controllers) access to only certain data sets. This lets you create workflows combining automated and manual steps, ensuring that the know-how of all stakeholders is incorporated in the outcome.
You set up Data Studio once and don’t need to repeat the whole process next year. Instead, you just upload the new raw ERP data for another period and get the output.
This already makes the life of transfer pricing professionals and their stakeholders much easier. But even more potential lies in the operational transfer pricing area, where processes are usually executed monthly or quarterly, i.e., 4–12 times more often than in compliance. This means that the efficiencies gained with Data Studio can be scaled to another level. What’s more, Aibidia Data Studio supports:
- Automated data validation – ensuring the data makes sense, for example, flagging negative numbers where only positive values are expected
- Audit trail – tracking what was done with the data and who made changes
- Integration with other Aibidia applications – for example, transactional data automatically flows into TPDoc, and no additional steps are required to have structured data on the platform.
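As a simple illustration of the kind of automated validation rule described above – a hedged sketch of the general idea, not Aibidia’s actual rule engine:

```python
# Hypothetical validation rule: flag rows where a positive amount is
# expected but a non-positive value appears. Data is made up.
transactions = [
    {"id": 1, "type": "sale of products", "amount": 1200.0},
    {"id": 2, "type": "sale of products", "amount": -300.0},  # suspicious
]

def validate_positive_amounts(rows):
    """Return the ids of rows whose amount should be positive but is not."""
    return [r["id"] for r in rows if r["amount"] <= 0]

issues = validate_positive_amounts(transactions)
print(issues)  # [2]
```

A rule like this catches sign errors and misposted entries before they flow into documentation or analytics, which is exactly where a manual Excel-based process tends to miss them.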
Aibidia Data Studio has the potential to revolutionize how transfer pricing experts work with data and to enable transfer pricing digitalization for many companies. We invite you to book a demo and see how it works.
Read more about Aibidia’s Data Studio.