In a changing digital world, old business models are being disrupted, and the winners will be those who can adapt quickly. We are in the transition from Web 2 to Web 3. Web 1, 2 or 3, it doesn’t really matter, because it is all data, right? If that is your view, pay attention, because it is not that simple.
“It doesn’t really matter, because it is all data, right?”
Web 2 is company-centric, with a need for data availability, quality and interoperability for monetization: topics most companies struggle with. Web 3 is user-centric, meaning users own and control (their) data, with the potential to share, collaborate on and monetize that data.
Over the last decade, the focus was on Web 2. You can recognize this in the high volume of cookies and apps capturing consumer data, the growth of central data platforms that store, harmonize and share data, and the rise of the data scientist function.
How will Web 3 differ? Web 3 is powered by a decentralized network of peers that enables data sharing between users and applications. This allows for an easier, more transparent and more secure exchange of data: exactly the struggles of Web 2.
This data exchange is gaining traction in the mainstream, supported by the accelerating adoption of blockchain and, on the horizon, the (meta)verse (The Metaverse: new frontier that re-imagines retail & health. And data re-imagines the Metaverse. – D8A directors). Decentralized data sharing in Web 3 allows users to interact directly without a central intermediary company. It also allows for even faster growth of data, which will require adjustments in how data is analysed, as well as an opportunity to accelerate the development of AI solutions. And, importantly, it gives the user control over where and how to share which data with whom. In theory, the ultimate data privacy.
Web 3 does require rethinking data security; keep that top of mind.
Blockchain (or smart contracts) is a good example of how data provenance can be controlled. Provenance covers the source of the data, its initial quality and which data processing activities have taken place, and it is important for companies and consumers alike. By capturing every step of the way in smart contracts, every step becomes retraceable. This is relevant for companies, e.g. for fraud detection and prevention, or for building a decentralized, large-volume body of scientific data, one of the key challenges today.
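The retraceability idea can be illustrated with a minimal hash-chain sketch in plain Python. This is not an actual smart-contract platform, and the record fields (actor, action, payload) are illustrative assumptions; the point is only that each processing step references the hash of the previous one, so any tampering with the history is detectable.

```python
import hashlib
import json

def record_step(chain, actor, action, payload):
    """Append a provenance record that links to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"actor": actor, "action": action, "payload": payload, "prev": prev_hash}
    # Hash the record body (canonical JSON) and attach it afterwards.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    """Re-compute every hash; any tampered or re-ordered step breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
record_step(chain, "sensor-17", "capture", {"quality": "raw"})
record_step(chain, "etl-job", "cleanse", {"quality": "validated"})
print(verify(chain))  # True; edit any earlier record and verification fails
```

A real blockchain adds distributed consensus and signatures on top of this linking principle, which is what makes provenance trustworthy between parties that do not trust each other.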
And for users, blockchain is relevant to understand how their data is used and whether it is in high demand, thereby creating monetization possibilities for users and increasing the incentive for data sharing. Decentralized data democratises data access, enabling companies AND users to create, deploy and use specific instruments that were previously restricted. Here the shift from company-centric to user-centric becomes clear. Data can become your income.
Whether in Web 2 or 3, data is the foundational layer of innovation. Web 3 extends that by building meaningful relationships between companies and users (cutting out the middlemen), improved quality of data, elevated possibilities and value of digital transactions, and increased data privacy.
As a user, what do you need to have in place? Create a thorough understanding of your rights as owner of your own personal data. Share your data where and when you want, but act wisely. Understand which data can have value where, and monetize accordingly. Have your data available in a personal data vault in accordance with Self Sovereign Identity principles. In short, become a data expert!
There has never been a better time to create impact with data & analytics. More and more data is available, computing power is increasing fast and analytical techniques are maturing. Being data driven is the talk of the town, and it is surely part of your organization’s strategy. In the last decade, most companies have invested in data & analytics initiatives to enhance efficiency, increase sales and comply with regulation. Yet these initiatives have not yet resulted in full business value. Organisations are getting ready for the next wave: getting value out of data & analytics products.
1. Climbing fast: the importance of data in value creation
Data is an asset and has (future) economic benefits. During the last years, the volume, complexity and richness of data have grown exponentially, mainly driven by e-commerce and Internet of Things or sensor data, and are expected to continue to do so (see also: McKinsey). In fact, so much of the world is instrumented that it is difficult to actually avoid generating data. We have entered a new era in which the physical world has become increasingly connected to the digital world. Data is generated by everything from cameras and traffic sensors to heart rate monitors, enabling richer insights into human and “thing” behavior.
Add to that the current growth in analytical power (e.g. analytics, machine learning, artificial intelligence). The confluence of data, analytical and computational power today is setting the stage for the next wave of digital disruption and data-driven potential for growth.
“… the confluence of data, analytical and computational power today is setting the stage for the next wave of digital disruption …”
This growth has a number of preconditions. Of course, organisations need to recognize that data is an asset. It also requires that the required data is correct, available and (re-)usable. And potential revenue generation needs to be qualified, e.g. through data marketplaces, data-as-a-service integration, digitization of customer interactions, product development, cost reduction, optimizing operations and improving productivity.
In the last ten years data & analytics initiatives within organisations mainly focused on:
→ Controlled data & analytics, e.g. data organisation, data governance, privacy or trusted analytics;
→ Centralized source of available data, e.g. data platform or data engineering;
→ Insights value chain, e.g. deriving insights through use-case based machine learning, process mining or BI self-service by a team of analytics experts.
Not all initiatives bring the desired results. For example, deriving new insights is often considered innovative, but any executive will recognize the sprawl of self-generated BI reports, each claiming its own version of the truth, making it complex and time-consuming to turn insights into company steering. And although these initiatives are by and large based on business cases such as efficient reporting, complying with regulations or replacing end-of-life legacy systems, controlling and centralizing data & analytics in fact supports hygiene factors. There is also the trend that algorithms seem to be commoditizing, e.g. Google and Amazon are providing free access to their analytics libraries. In the end, this trend will transform any insights value chain into a hygiene factor as well.
In any case, the results of most current data & analytics initiatives are not a breakthrough innovation or digital disruption.
2. Approach your data with a product mindset
So, while most data & analytics efforts are — still — performed to facilitate and improve the insights value chain, the real innovation is productization of data & analytics. Organisations need to look beyond their team of skilled data & analytics professionals with governed data sources and the latest analytics tools and technologies if they want to leverage data to improve and increase revenue. To actively contribute to this, organisations should start viewing data & analytics through a product development lens.
This means that we need to transform from data & analytics (point) solutions mainly focused on internal value towards the creation of full-fledged data & analytics products. Productization involves abstracting and standardizing the underlying principles, algorithms and code of successful point solutions until they can be used to solve an array of similar and relevant business problems. In the end, this should lead to a robust portfolio of data & analytics products.
To enable this, organisations need to have the following foundation in place:
Bridge the gap between data & analytics and business – In many organisations, data & analytics and business execution are totally separate. The business lacks understanding of what is possible and therefore asks for everything, without prioritization and without a request funnel. This leads to the development of data & analytics point “solutions” without full business potential. Move beyond the current hype of ‘data literacy’ and actually involve relevant (business) stakeholders in data & analytics. Embrace change. And be practical by starting with data quality, ownership or relevant use-cases to improve daily operations through analytics, BI or robotics. Expand from there and be persistent. Truly and sustainably embedding data & analytics in an organisation is a long-term process.
Anchor data & analytics competences at executive level – Business impact from data-derived insights only happens when data & analytics is implemented deep within and consistently throughout the organization. This requires commitment, ownership, sponsorship and direction of a leader with the authority and sufficient understanding of data & analytics and its potential.
Understand the potential – The value of data & analytics depends on uniqueness and end uses. How to monetize the potential of data & analytics? Its value comes down to how unique it is, how it will be used, and by whom. Understanding the value is a tricky proposition, particularly since organizations cannot nail down the value until they are able to clearly specify its uses, either immediate or potential. Data & analytics may yield nothing, or it may yield the key to launching a new product line or making a scientific breakthrough. It might affect only a small percentage of a company’s revenue today, but it can be a key driver of growth in the future. A general rule of thumb is that uniqueness of data increases its value, so find that (hidden) gem. Where possible, join unique sets to further enhance the value and potential of data. Product development takes an investment in time, people and technology. Set up a technical test-and-learn lab environment where pilots and beta-version products can be developed and their value further explored and understood. Include domain experts, data scientists and data experts. Capture client and end-user needs in this lab environment and transform them into solutions and products. Identify quick wins for early-adopter clients, to learn how products work in a client environment. Set up cooperation with sales departments and potential partners. Standardise, improve and scale products.
Take sufficient time, be lean where possible – Many organisations have invested significant resources in data & analytics initiatives, e.g. hiring and/or educating data scientists, data lake implementations and data ownership. They are eager to finally monetize the data so that it indeed becomes ‘the new oil’. But if products are unclear or without market relevance, there is the risk of missing targets and being overtaken by competitors. At the same time, be opportunistic about quick results: perform pilots as much as possible to create an early-adopter client base.
3. The coming wave: data & analytics product opportunities
Potential data-driven product opportunities are well researched, identified and described. Think about e.g. IoT-based analytics for leasing companies and car insurers, real-time supply and demand matching for automotive, logistics and smart cities, personalized e-commerce and media, data integration between banks and B2B customers, and data-driven life sciences discovery. But besides resolving the above-mentioned foundation, a detailed approach on how to realise these opportunities is less clearly defined. This section contains the five main steps that all organisations should follow:
Step 1: Conceptualize the product
Identify a data & analytics product that meets market needs within the lab environment. To identify relevant opportunities, include product experts, business groups (e.g. super users, sales and marketing) and potential clients. The process involves product definition and identification of the data required for the product (which should include sourcing data creatively). Organisations with unique internal data have an increased opportunity to create highly valuable products with a good competitive edge. E.g. a bank with an agricultural background can use unique data that is highly sought after by other financial institutes. And a supply chain company can enhance its planning software with integrated robotics to increase efficiency for its clients, reducing churn and enhancing sales opportunities. Take the uniqueness of available data into account early on in the process. Determine the market position and potential business model for the prototype. There are three main prototype categories: a data-as-a-service product, algorithm code performing robotics and analytics & BI, and software code containing interactive insights based on analytics algorithms.
Step 2: Acquire and build
Data as foundation for new products is traditionally captured internally and externally to support daily operations and reporting & insights. Given the vast amounts of data being available from commercial and public sources, extend the purpose of data acquisition for productization. Acquired data needs to be correct, timely, understandable, with a clear provenance — including restrictions for usage & storage and in accordance with regulatory compliance such as privacy laws.
To design and build correct algorithms supporting robotics and/or analytics products, an analytics pipeline needs to be established. In this pipeline, the correctness, reusability, bias, quality and provenance of algorithms and the quality of code are managed. Integrating CI/CD (continuous integration / continuous delivery) supports a lean and agile analytics pipeline with fast testing of the prototype’s value. Data and code need to be stored in an agile, scalable and secured environment. And finally, data & analytics products gain value from the context of their use, user interface and/or ease of use. So incorporate UX design into the product development approach.
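One concrete element of such a pipeline is an automated data quality gate that a CI/CD step can run before a prototype is promoted. The sketch below is a minimal illustration; the field names and the completeness threshold are assumptions, and a production pipeline would check many more dimensions (timeliness, provenance, bias) than completeness alone.

```python
def quality_gate(records, required_fields, max_null_ratio=0.05):
    """Minimal data quality gate: completeness check per required field.

    Returns (passed, report) so a CI/CD step can fail the pipeline
    when incoming data falls below the completeness threshold.
    """
    total = len(records)
    report = {}
    for field in required_fields:
        nulls = sum(1 for r in records if r.get(field) in (None, ""))
        report[field] = nulls / total if total else 1.0
    passed = all(ratio <= max_null_ratio for ratio in report.values())
    return passed, report

batch = [
    {"customer_id": "c1", "amount": 10.0},
    {"customer_id": "c2", "amount": None},
]
ok, report = quality_gate(batch, ["customer_id", "amount"], max_null_ratio=0.1)
print(ok, report)  # the 'amount' field has 50% nulls, so the gate fails
```

Wiring a check like this into the pipeline turns data quality from a one-off audit into a continuously enforced precondition for every release.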
Step 3: Refine and validate
Once data is identified and the (algorithm and software) code and user interfaces are designed and built, they need to be enriched, refined and validated.
Step 4: Readiness
Store data in an advanced environment where it can be integrated, queried, processed and searched. This makes data available sufficiently fast and reliably for data-as-a-service products. Ensure that this is supported by a solid and robust data architecture. Distribution channels for algorithms and software code can be numerous, e.g. a cloud environment where they can integrate with web and mobile solutions, custom builds integrated into client environments, or pre-defined (API) connections made to measure for clients.
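Such a pre-defined connection can be sketched as a thin query layer over the stored data. The example below is a simplified in-memory stand-in with illustrative field names; in practice this interface would sit behind a REST endpoint and a real data store, but the design point is the same: clients query through a controlled interface rather than receiving raw data dumps.

```python
class DataService:
    """Minimal data-as-a-service sketch: clients query through a
    pre-defined interface instead of receiving raw data dumps."""

    def __init__(self, records):
        self._records = records

    def query(self, **filters):
        """Return records matching all given field=value filters."""
        return [
            r for r in self._records
            if all(r.get(k) == v for k, v in filters.items())
        ]

service = DataService([
    {"region": "EU", "segment": "retail", "index": 102.4},
    {"region": "US", "segment": "retail", "index": 98.7},
])
print(service.query(region="EU"))  # only the EU record is returned
```

Keeping the query surface narrow like this is also what makes usage restrictions and privacy rules enforceable: the provider decides which fields and filters are exposed.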
Step 5: Market and AI feedback
The competitive nature of the information product space, the availability of new data sources and the demand for timely decision support require an ongoing emphasis on innovation, pricing and monitoring product usage. Adding this step at this stage of the analytics-based data product development process is consistent with the iterative nature of product development in a “lean startup” context. Once again, the evolution of new technologies has provided a mechanism for facilitating a feedback and information extraction process from the marketplace.
Brief recap: companies are eager to utilize the new data oil, but not every organisation is able to do that successfully. By taking a comprehensive approach, persevering through sufficient knowledge building at ALL organisational levels and starting small with a step-by-step approach, you can be successful with data products and services.
In the digital world, there are two main flavours: those with extensive data and those that require extensive data.
(In this article, we leave out the data-native Big Tech companies.)
Those with extensive data are in fact the (international) corporations with trusted brands, mature system landscapes and long-lasting relationships with customers and partners. They can build upon large quantities of (historical) data, consistently used for existing processes and products. They could do much more with their data, maneuvering (the gambit) real value out of it.
Most corporations already invested in structural advantages for a competitive data edge: a supporting platform infrastructure, data quality monitoring, established data science teams and a data steward / data scientist attitude. For a maximal return on those investments, companies need to go the extra mile.
A strategy for data
The most common pitfall of a data strategy is that it becomes an overview of big words only, with a (too!) high focus on technology and analytics. Yet technology should be an enabler, and analytics is just a manifestation. Don’t gamble with data (products): a good data strategy starts with a clear vision, related to market, technology and (regulatory) developments. Include a target operating model to achieve the strategy. But most of all, include the value of data. Determine the use-case types that will create the most value. Large corporations have an unparalleled knowledge of industry and markets and are uniquely positioned to oversee this. Of course, there are value cases for efficiency gains and productivity improvements. But limiting the strategy to these obvious values tends to close doors on new opportunities. Companies must have a clear ambition pathway to data-driven revenue. This new revenue can include rewiring customer interaction, creating a completely new product or business, and stepping into new markets.
In practice, data-driven revenues prove to be more difficult than imagined. The effort of introducing new products within new markets, combined with uncertain results, makes companies hesitant. Without a solid and funded ambition and a defined risk appetite, this can result in only minimal innovations, such as adding data features (apps!). Compared to data-native companies, this minimal innovation sometimes seems small potatoes. A clear data strategy gives companies mature guidance for innovation KPIs, investments, risks and market opportunities. The data strategy will help to build success and develop new services, products and even ventures.
Data equals assets
In general, there are two flavours when it comes to data within companies. Companies have less data than they realize. Or companies have more data than they realize and under-utilize it, due to insufficient awareness of its value. Understanding the value of your data is based on five pillars:
Historical data cannot be easily replicated: years of data about customers, production, operations, financial performance, sales, maintenance and IP are enormously valuable. Such historical data is beneficial for increasing operational efficiency, building new data products and growing customer intimacy. Although Big Tech companies have been around for some years already, they cannot compete with dedicated historical data sets. If the (meta)data is of good quality, the value increases even more. Mapping where this data resides gives an up-to-date overview of relevant data throughout the system landscape.
Corporations are highly aware of the relevance of privacy regulations and have adopted data privacy measures and controls into their data operations. This way, the data that is available is certain to be in accordance with (global) data privacy legislation.
Being part of a – traditional – chain with external suppliers and receivers (e.g. supplying materials to a manufacturer who sells them to a retailer) can leverage the data into multiple views on e.g. sourcing and warehouse management. Established corporations are uniquely situated to build data chains. Having a trusted brand creates traction for cooperation and partnerships to capture, integrate, store, refine and offer data & insights to existing and new markets.
“Understanding the value of data requires real entrepreneurship”
Large corporations can enhance existing & new products with data, e.g., through sensor data. Big Tech companies are doing that now mostly for software products. More traditional companies are particularly capable of doing this for hardware products. This way of thinking is still very much underdeveloped, because it is difficult to introduce a new product or, even worse, enter a new market with a new product. Yet it is also the ultimate opportunity! Build data entrepreneurship by starting small while understanding the full potential of data. Examples of small starts are identifying whether a data model can be IP, e.g., when it is part of a larger hardware product. In real life, starting small often means focusing on a solution that is close to home, e.g., joining multiple data sets into one and/or building a dashboard, which can be offered to customers as an extended service. These are often chosen for feasibility reasons. From a data product perspective, don’t consider such an approach as small; consider it as not even starting. Companies that do not progress beyond these products should at least have a simultaneous experimental track, building and failing new products and services to learn what works and what doesn’t. Understanding the value of data requires entrepreneurship (see also the example of Rolls Royce here).
Large and established corporations are the epitome of entrepreneurship. It is at their very core. Yet often not for data. Data can be so alien to them that experimentation for value is hesitant or not happening at all. And this is where start-up companies are not lacking. They might not have the large historical data sets, trusted data chains or easy connections with available hardware products. But they do have the entrepreneurial spirit and are highly aware of the value of data. And they have the capability to experiment and become successful with new products.
Becoming data entrepreneurial means knowing which data you have, understanding the (potential) value and daring to look beyond the obvious.
Avoid mistakes. How data dependent is the newest digital development?
The Metaverse, which originated in science fiction and is frequently applied to gaming platforms, is trending in e-commerce and increasingly in healthcare.
Simply stated, the Metaverse is a connection between the physical and virtual worlds and is seen as the successor to the mobile internet.
Purchasing a digital product in one ecosystem in the Metaverse (e.g., Facebook) allows you to use it in another (like TikTok). Or buy a physical product which includes a digital twin, a digital representation as well as a statistic, for your online persona. And vice versa: the digital twin could be used to increase sales at a physical location; if a physical product is not available in a shop, the digital twin can be shown as an example. Want to model how a car would behave under the same conditions as in the physical world you are in right now (weather, population, other vehicles on the road)? Then you can, in the Metaverse. Or just google Fortnite’s Ariana Grande concert in the Metaverse.
Meta-commerce, if you like: beyond e-commerce. The Metaverse is also beyond virtual reality. It is a hybrid of VR, AR and mixed reality, and it can interact with real life.
So interoperability between ecosystems is key. This is how the timestamps of blockchain can show their worth beyond crypto! The same goes for data. To understand the value of good data, you need to think beyond the structured data that often comes to mind when we talk about data, AI and innovations.
The Metaverse is all about unstructured data or files, e.g., images, videos, music, SEO. This is often a neglected area when it comes to data quality concepts such as accuracy (high, medium, low quality), timeliness, versioning (which originates from archiving principles and is now becoming directly related to core business & product life cycle management), format and completeness. Each file needs sufficient data quality rules, definitions and other metadata to enable the above-mentioned interoperability. It needs to be totally clear whether the digital twin you’re using is from this season or last season, for instance. And don’t forget hygiene factors such as ownership (who owns the digital twin? who can re-sell it?), customization possibilities, portability, sharing agreements, security and, most of all, privacy. Privacy and the Metaverse is a hot topic. Being in accordance with legislation is key in a highly digital world.
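What "sufficient metadata" for an unstructured asset might look like can be sketched as a simple validation check. The schema below is purely illustrative: the field names (quality tier, season, version, owner, format) and allowed values are assumptions standing in for whatever standard an ecosystem would actually agree on.

```python
# Illustrative metadata schema for an unstructured asset such as a
# digital-twin file; field names and allowed values are assumptions.
REQUIRED_META = {
    "quality": {"high", "medium", "low"},   # accuracy tier
    "season": None,                          # e.g. "2024-winter"; free text
    "version": None,                         # product life cycle versioning
    "owner": None,                           # who owns / may re-sell the twin
    "format": {"png", "jpg", "mp4", "glb"},
}

def validate_metadata(meta):
    """Return a list of problems; an empty list means the asset carries
    enough metadata to be exchanged between ecosystems."""
    problems = []
    for field, allowed in REQUIRED_META.items():
        if field not in meta or meta[field] in (None, ""):
            problems.append(f"missing {field}")
        elif allowed is not None and meta[field] not in allowed:
            problems.append(f"invalid {field}: {meta[field]}")
    return problems

asset = {"quality": "high", "season": "2024-winter", "version": "v2",
         "owner": "user-42", "format": "glb"}
print(validate_metadata(asset))  # [] — the asset is complete
```

The value of a check like this is less in the code than in the agreement behind it: interoperability only works if all participating platforms validate against the same schema.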
From an analytics perspective, the Metaverse is similar to AR & VR: it needs high-quality training data. This means data sets need to be accurate and fit for use, removing bias and including good data labeling, based on standard classifications.
The Metaverse within the healthcare sector seems a logical next move. Here the ownership, portability and privacy are even more significant. Further increasing the value of good data quality, governed by a fitting regime.
In short, the Metaverse is the upcoming opportunity to increase the value of good data. And for businesses to become further data driven.
Enabling or blocking? Sovereignty of personal data.
Within the digital world, individuals are mostly viewed as potential consumers (obviously already a high share) or patients (a currently growing share). The data of individuals needs to comply with the regulations of the country or region where the data is collected, i.e., it needs to fit with privacy and security requirements.
Companies are building views on individuals, based on the name, address, email etc. that have been provided through every registration with an online service, as well as on online behaviour, e.g., through tracking cookies. These centralised views or centralised identities are stored within silo-based platforms. Neither personal data nor individual behaviour is easily portable. This means that your digital identity exists in many small pieces, with several companies knowing different information about you. It also means that you have to create a unique password for every profile you make, which can be cumbersome, and many tend to use the same password more than once. All of this creates security risks, since your personal data is being stored and managed by many entities and a password breach might give access to several of your accounts.
An attempt to address these issues is federated identities. Individual identities are managed in a centralized system run by a company or government. The system then distributes the data from the individual to a digital service. Examples of where this is in use include banks, insurers, retail and health. A federated identity enables easier digital activities through a single-sign-on solution. However, a federated identity is still silo-based, since it can only be used with web services that accept this solution.
“That’s right, SSI sets data ownership at the individual level.”
A next generation of identity solutions that is currently being developed and taken into use is self-sovereign identity (SSI). This type of digital identity is a user-centric identity solution that allows you to be in control of your data and only share the strictly relevant information. An example would be a situation where you need to prove that you are of age. With an SSI you can document that you are over 18, without disclosing your exact age. Or document that you have received a specific vaccine, without disclosing information about all the vaccines you have ever received or other sensitive health data. Other examples are sharing the fact that you have graduated with your future employer, your medical record with a hospital, or your bank account with a store. In your own personal vault, if you like (also called a ‘holder’ or ‘wallet’), you own and manage your data. That’s right: SSI sets data ownership at the individual level. Data ownership would resolve a large topic that often proves to be a blocker for companies fulfilling their digital ambitions. From this vault you decide which companies & organisations you want to share your personal data with, defined per specific purpose. For this purpose, personal data needs to be classified (e.g., in accordance with privacy & security regulations) into which data is open to all, which is private and which is secured. The vault provider needs to have good technical solutions (e.g., with verifiers and encryption), a sufficient governance regime and controls in place to support this.
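The "over 18 without disclosing your age" idea can be illustrated with a heavily simplified sketch: the issuer checks the raw data once and signs only the predicate outcome, so the verifier never sees the birth date. This is not a real SSI or verifiable-credentials implementation; it uses a symmetric HMAC as a stand-in for a signature (a real system would use asymmetric signatures so the verifier does not hold the issuer's secret), and all names are illustrative.

```python
import hmac
import hashlib
import json
from datetime import date

# Stand-in for the issuer's signing key. In a real SSI system this would
# be an asymmetric key pair, so verifiers never hold the issuer's secret.
ISSUER_KEY = b"issuer-demo-secret"

def issue_predicate_credential(birth_date, predicate="age>=18"):
    """Issuer checks the raw data once, then signs only the predicate outcome."""
    today = date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    claim = {"predicate": predicate, "result": age >= 18}
    sig = hmac.new(ISSUER_KEY, json.dumps(claim, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}  # note: no birth date inside

def verify_credential(credential):
    """Verifier trusts the issuer's signature; it never sees the exact age."""
    expected = hmac.new(ISSUER_KEY,
                        json.dumps(credential["claim"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"]) \
        and credential["claim"]["result"]

cred = issue_predicate_credential(date(2000, 1, 1))
print(verify_credential(cred))  # True: proves "over 18" without the birth date
```

The credential the holder stores in their vault contains only the predicate and the signature, which is exactly the minimal-disclosure property SSI aims for.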
SSI will mean that individuals need to understand what ownership comprises, what the potential risks are and what good practices for sharing data look like. Data literacy should be extended from mostly companies to more individuals. And companies should prevent technical, legal, ethical, fairness and security pitfalls (see also: 10 principles for SSI), e.g., regarding transparency of systems & algorithms as well as data monetization.
What is data monetization? According to McKinsey, it is the process of using data to increase revenue, which the highest-performing and fastest-growing companies have adopted and made an important part of their strategy. Internal or indirect methods include using data to make measurable business performance improvements and informed decisions. External or direct methods include data sharing to gain beneficial terms or conditions from business partners, information bartering, selling data outright, or offering information products and services (Definition of Data Monetization — IT Glossary | Gartner).
How to deploy a data monetization strategy
Companies that innovate through data monetization recognize this: monetization can be tricky. Get it right, and you have happy customers and users who are willing to pay for your product. But mis-prioritize and your audience numbers quickly drop, along with your revenue. Building data monetization on the principles of ‘trusted data’ mitigates the risk of mis-prioritisation.
There is no clear-cut answer as to how a data-driven product generates revenue, or when monetization is appropriate. And there will, of course, be some products that never monetize.
Having a strategy will deliver guidance. A data monetization strategy is, simply put, a plan to generate revenue from data-driven products. Like any plan, it guides and brings structure. It is not fixed: it should be flexible enough to develop with the product, the market the product exists in and its users. Goals can and will change over time, and so strategies need to evolve to continuously achieve the goals they’re designed to target. Data products can be based on loyalty & subscription models or a one-time purchase model. It is important to understand at the beginning of the strategy which model(s) the data can leverage, to create focus and scalable results.
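The difference between the two revenue models can be made concrete with a toy cumulative-revenue calculation. All numbers below are illustrative assumptions, not benchmarks: a one-time purchase pays once up front, while a subscription pays monthly but loses customers to churn each month.

```python
def cumulative_revenue(months, price_one_time=0.0, monthly_fee=0.0,
                       churn_rate=0.0, customers=100):
    """Toy cumulative-revenue model for a data product.

    One-time purchases pay once up front; subscriptions pay monthly
    but the active base shrinks by the churn rate each month.
    """
    revenue = customers * price_one_time
    active = float(customers)
    for _ in range(months):
        revenue += active * monthly_fee
        active *= (1 - churn_rate)
    return round(revenue, 2)

# Same customer base of 100, two illustrative models over 24 months:
one_time = cumulative_revenue(24, price_one_time=500)
subscription = cumulative_revenue(24, monthly_fee=50, churn_rate=0.03)
print(one_time, subscription)
```

Even a crude model like this shows why the choice matters at the start of the strategy: the subscription's outcome is dominated by churn, which is a product quality problem, while the one-time model's outcome is dominated by up-front conversion.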
Data monetization strategy must be built upon the following pillars:
* Understanding how data can be converted into value (see below) and the associated opportunities and challenges of data-based value creation;
* Strategic insights into improving and preparing data to support monetization;
* Strategic insights in the potential value, markets and ecosystems.
Opportunities for monetization
Data-driven business models help to understand how data can uncover new opportunities. This can be focused on value for efficiency (reducing costs and risks), value for legislation (complying with relevant regulations) and value from maximizing profits by increasing impact on customers, partnerships and markets. This can include embedding data models, metadata and analytics into products and services. Data monetization needs to be scalable, flexible and user friendly, thereby providing advantages for the company and its customers.
Indirect monetization includes data-driven optimization. This involves analyzing your data to gather insights into opportunities to improve business performance. Analytics can identify how to communicate best with customers and understand customer behavior to drive sales. Analytics can also highlight where and how to save costs, avoid risk and streamline operations and supply chains.
“Having a full understanding of monetization possibilities will help to keep an open mind.”
Direct monetization of data is where things get interesting. It involves selling direct access to data, e.g. to third parties or consumers. This can be in raw form (heavily anonymised due to privacy regulations), aggregated, metadata only, or transformed into analyses and insights.
Data-as-a-Service is the most direct data monetization method. Data is sold directly to customers (or shared with partners for mutual benefit). The data is, e.g., aggregated and/or anonymised, to be fully in accordance with legislation and to enable trusted data. Buyers mine the data for insights, including combining it with their own data, or use it for AI solutions within their software. Ecosystem play is the newest area for Data-as-a-Service.
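Before data can be sold or shared in accordance with legislation, identifiers are typically dropped and records aggregated. A minimal sketch of that preparation step, assuming illustrative field names and a simple minimum-group-size rule as the anonymisation threshold:

```python
from collections import defaultdict

# Hypothetical raw purchase records; field names are illustrative only.
records = [
    {"customer_id": "c1", "postcode": "1011", "spend": 120.0},
    {"customer_id": "c2", "postcode": "1011", "spend": 80.0},
    {"customer_id": "c3", "postcode": "1011", "spend": 200.0},
    {"customer_id": "c4", "postcode": "2033", "spend": 50.0},
]

def aggregate_for_sharing(records, min_group_size=3):
    """Aggregate spend per postcode, dropping identifiers and
    suppressing groups too small to be safely anonymous."""
    groups = defaultdict(list)
    for r in records:
        groups[r["postcode"]].append(r["spend"])  # identifiers are discarded
    return {
        postcode: {"customers": len(spends), "avg_spend": sum(spends) / len(spends)}
        for postcode, spends in groups.items()
        if len(spends) >= min_group_size  # k-anonymity-style suppression
    }

shared = aggregate_for_sharing(records)
# postcode "2033" has only one customer, so it is suppressed from the output
```

Real Data-as-a-Service offerings need far stronger guarantees (differential privacy, contractual controls), but the shape is the same: the buyer only ever sees the aggregate.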
Selling insights applies analytics to (combined) internal and external data. It focuses on the insights created using data, rather than the data itself. The insights are either sold directly or provided as, e.g., analytics-enabled apps.
The data ecosystem is a more flexible type of data monetization. It provides highly versatile, scalable and shareable data and/or analytics, when needed in real time. Standardized exchanges and federated data management enable using data from any source and in any format.
Analytics-as-a-Service is the most advanced and exciting way of monetizing data. It seamlessly integrates features such as dashboards, reports and visualizations into new and existing applications. This opens up new revenue streams and provides a powerful competitive advantage.
Having a full understanding of monetization possibilities will help to keep an open mind. While many companies focus on analytics products & services, there are more opportunities! Always stay within legal & ethical boundaries, but explore all opportunity formats to grasp new markets.
How the Legal, Economical, Technology and Scientific reflexes impact data innovations.
Much has been said about proper data ownership. Many companies struggle to achieve good data quality, data monetization and even advanced analytics when ownership is unclear, or not pro-actively practiced. In short: a data owner is accountable for the correct and sufficient availability of good and trusted data. That accountability can only work if it is applied at the right organisational level, i.e. the executive level. Any company that tries to delegate data ownership to the tactical level (e.g. a data team lead) or operational level (e.g. a data steward) will not reach its data-driven goals & objectives.
For organisations without proper data ownership, backed by a clear strategy, defined & monitored KPIs and a supporting governance structure, there usually is only one road ahead: employees throughout the organisation will continuously firefight data issues, while e.g., AI solutions, data innovations and compliance goals are not met.
In environments with highly sensitive organisational politics, substitute solutions for ownership are often sought. You can identify them as the L.E.T.S. reflex:
Solutions with a high focus on Legislation (e.g., privacy, financial or health regulations);
Solutions with a high focus on Economical benefits (e.g., monetizing insights, increasing operational effectiveness through robotics);
Solutions with a high focus on Technology (e.g., a data platform/ecosystem, AI & Machine Learning, data warehouses, BI tooling);
Solutions with a high focus on Science (e.g., researching data or analytics solutions, whitepapers, subsidized research).
Data-driven companies will recognize at least one of these reflexes. Data innovation is complex, due to many factors, which makes owning up to a solution for this complexity a challenge. It is often easier to focus on one or more reflexes. Yet none of these reflexes can be seen as a substitute for data ownership.
It is time to increase acknowledgement of the importance of a chief data officer.
As companies move towards working data-driven, monetizing data in new and enhanced services and products is essential. Traditionally heavily regulated industries, e.g. financial and health, first focused on bringing their data under control. Their efforts concentrated mainly on data quality management, data privacy, data governance and E2E trusted data lineage. These efforts are often led — or owned — by a Chief Data Officer (CDO).
In this article, we advocate shifting or extending this focus of the Chief Data Officer towards data in control AND data in use.
CDOs define and communicate the company's vision on data management and data use. Through this vision, the CDO gives direction and guidance, advocates for change and sets priorities for running projects. Most companies' CDOs have to some extent achieved this for data management. The extended focus of Chief Data Officers, which we advocate in this article, contains standard processes for the design, prototyping, development, productizing and use of data & insights products & services. Furthermore, it is the CDO who defines a standard set of technologies to support these processes and create these solutions. Where needed, this builds on the data management foundation implemented by the CDO in previous years.
The Chief Data Officer ideally combines business expertise, a technology background and analytics/BI, extended by common commercial sense, an understanding of production processes and knowledge of relevant 3rd party partners to cooperate with. Organisations without an ‘extended CDO’ will experience difficulties and potential delays in reaching their data-driven goals in line with new developments in the market. Without strategic guidance and steering, there is an increased risk that departments and units will define their own standard processes, sets of technology and data-driven products and services, making it harder to leverage pre-existing data foundations and cross-unit collaboration to enable effective market penetration. Teams will struggle to escalate and address growing concerns as sufficient C-level representation is missing.
Concluding, companies benefit from a Chief Data Officer with a focus on data in control and data in use. Top-down ownership and alignment of data initiatives, standardisation of processes and data tooling and a clear escalation path for growing concerns are necessary to succeed as a data-driven company.
Marketeers and dedicated advertising benefit from good data quality
Google announced its intention to kill off tracking cookies (so-called 3rd party cookies) within its Chrome browser: cookies which advertisers use to track users around the web and target them with dedicated ads. Google is not the only major player altering the digital ad landscape. Apple has already made changes to restrict 3rd party cookies, along with changes to mobile identifiers and email permissions. Big Tech's moves against 3rd party cookies are driven by the need to respect growing data privacy consciousness. Most consumers don't like the feeling of being tracked across the internet (70% of U.S. adults want data regulation reform and 63% of internet users indicate that companies should delete their online data completely).
For most marketeers, this paradigm change presents huge challenges for customer acquisition through tracking users and targeting them with dedicated digital advertising.
On the other hand, 3rd party cookies are inherently problematic, from limited targeting capabilities and inaccurate attribution to the personalization & privacy paradox. Their loss presents an opportunity to provide a smaller group of high-value customers with higher-caliber and increasingly personalized experiences. In other words, losing these cookies might become a blessing in disguise.
Confronting data acquisition challenges in a cookie-less future
For all the shortcomings of 3rd party cookies, the marketing industry does not yet have a perfect answer for how to acquire customers without them. Marketeers are waking up to the impactful change they are facing. One potential answer to the loss of 3rd party cookies is to replace them with 1st & 2nd party data*, i.e., data shared directly by the customer, such as an email address, phone numbers and customer authenticators (see below). This data can become the mutual currency for the advertising business. First party data can be hard to obtain: you need to “earn” it, which calls for solutions on how to gain good quality 1st party data.
Some solutions focus on technology, e.g., Google's Federated Learning of Cohorts (FLoC), a type of web tracking that groups people into “cohorts” based on their browsing history for interest-based advertising. Other technology solutions include building a 1st and 2nd party data* pool, i.e., a Customer Data Platform (CDP). CDPs are built as complete data solutions across multiple sources. By integrating all customer data into individual profiles, a full view of customers can be created. Another solution is private identity graphs, which hold all the identifiers that correlate with individuals. Private identity graphs can unify digital and offline first-party data to inform the single customer view and manage the changes that occur over time. This helps companies to generate a consistent, persistent 360-degree view of individuals and their relationship with the company, e.g., per product brand. All to enable stronger relationships with new and existing customers.
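At its core, an identity graph links identifiers that are observed together (e.g., an email address and a device id on one login) and collapses connected identifiers into one customer profile. A minimal sketch of that resolution step, using a union-find structure; the identifier formats are assumptions for illustration:

```python
# Minimal private identity graph sketch: identifiers seen together are
# linked, and all connected identifiers resolve to one customer profile.

class IdentityGraph:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        """Return the root identifier of x's group (with path halving)."""
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def link(self, id_a, id_b):
        """Record that two identifiers belong to the same individual."""
        self.parent[self._find(id_a)] = self._find(id_b)

    def profile(self, identifier):
        """All identifiers currently resolved to the same individual."""
        root = self._find(identifier)
        return {i for i in self.parent if self._find(i) == root}

graph = IdentityGraph()
graph.link("email:ann@example.com", "device:abc123")  # web login
graph.link("device:abc123", "loyalty:99001")          # in-store purchase
profile = graph.profile("loyalty:99001")
# all three identifiers now resolve to one single customer view
```

Production identity graphs add confidence scores, decay of stale links and consent handling, but the unification mechanic is the same.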
Earning good quality data will increase the need for standardized, good quality customer journeys, and therefore the need for standardized, good quality data. Where previously design and data quality were not closely connected, the vanishing of 3rd party cookies now acts as a catalyst to integrate both.
Data quality is usually an unknown phenomenon for most designers**, design companies, front- & back-end software developers and marketeers. It requires a combined understanding of multiple domains, i.e., the user interface where data will be captured, the underlying processes which the captured data will facilitate, data storage & database structures and marketing (analyses) purposes.
Finding an expert with all this combined knowledge is like finding a real gem. If you do, handle with care! More likely, each domain will need at least an understanding of how it enhances and impacts the other domains.
For the (UI/UX) designer:
Have a good knowledge of data quality rule types. What is the difference between a format and an accuracy rule? Is timeliness of data relevant? What are pitfalls for data quality rules? How do you integrate multiple purposes (e.g. processes, data integration & analytics) into a dedicated data quality rule?
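The difference between these rule types can be made concrete with one field. Below is a sketch for a Dutch postcode captured in a sign-up form; the rule names, reference list and threshold are assumptions for illustration:

```python
import re
from datetime import date, timedelta

def format_rule(postcode: str) -> bool:
    """Format rule: four digits, a space, two uppercase letters ('1011 AB')."""
    return re.fullmatch(r"\d{4} [A-Z]{2}", postcode) is not None

def accuracy_rule(postcode: str, known_postcodes: set) -> bool:
    """Accuracy rule: the value must exist in an authoritative reference list."""
    return postcode in known_postcodes

def timeliness_rule(last_verified: date, max_age_days: int = 365) -> bool:
    """Timeliness rule: the value was verified recently enough to trust."""
    return date.today() - last_verified <= timedelta(days=max_age_days)

known = {"1011 AB", "2511 CV"}           # hypothetical reference list
format_rule("1011 AB")                   # True: well-formed
format_rule("1011ab")                    # False: wrong format
accuracy_rule("9999 ZZ", known)          # False: well-formed but not real
```

A classic pitfall is stopping at the format rule: a value can be perfectly formatted and still be inaccurate or stale, which is exactly where the accuracy and timeliness rules earn their keep.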
For product owners:
Ensure that expertise on data entry, and on how data is used within processes, is available at a granular level (i.e., on data field level). Onboard a so-called data steward who can facilitate the correct input for data quality. Let the data steward cooperate with front-end developers and designers.
Keep your data fresh. Data doesn’t last forever. Make sure data stewards support data updates and cleansing.
Data stewards should work with designers and front-end developers to determine which fields are considered as critical. These fields should be governed by a strict regime, e.g., for the quality and timeliness of data as well as for access to the data and usage purposes.
Personal authentication is a separate topic that needs to be addressed as such. Relying on big tech firms such as Facebook or Google can seem an easy solution, but it increases the risk of depending on an external party. Authentication needs to be earned to build authentic customer relationships. When customers give a company verifiable, durable pieces of identity data, they are considered authenticated (e.g. signing up for a newsletter or a new account via an email address). This will be a new way of working for most companies. Therefore, data stewards need to up their game and not only know existing processes but extend their view, understanding and knowledge towards new developments.
Data stewards must align with the Data Privacy Officer on how to capture, store and process data. When it comes to privacy, compliance and ethics, you can never play it too safe.
For data storage & databases:
Ensure that a data architect (or at least a business analyst) is involved in the design process. This is sometimes resolved by the back-end developer (who cannot work without aligning with the architecture office on data integration, models for databases and data definitions).
If standardized data models and/or data definitions reside within the organization, this should be part of the database development. Refer to authoritative source systems where possible.
If the application is made via low code, standardization of existing data models/architecture, data definitions and data quality rules is often part of the approach. Yet data quality checks should always take place as a separate activity.
Understand how customer journeys can facilitate 1st and 2nd party data capture. Determine which data is needed for insights. Gather insights requirements and work together with the data steward to define data quality rules that facilitate your insights. Now that the 3rd party source is limited, the value of the customer journey for marketing increases!
Privacy is one of the catalysts making 3rd party cookies disappear. This requires a new approach to acquiring personal data for marketing and ad targeting: new developments that require new skills and, more importantly, new cooperation between existing domains. Companies that enable this will lead this new way of working.
* Data from 1st party cookies occurs only within a company's own domain; data from 2nd party cookies can be used within and outside a company's own domain. This article takes mostly 1st party data into account. For 2nd party data, you can further investigate e.g., ‘data co-ops’: complementary companies that share data. Each member of the co-op should relate to the others in a meaningful way, because outside of your own web domain you'll be able to reach customers only on your partner sites — and this reflects on your brand.
** Of course, there are designers who work with data-enabled design. In the view of this article, this is a different topic, more focused on tracking & logging data, which is then analyzed to improve the design. This article is about good data quality when data is entered via a UI, e.g., as part of a customer journey.
How important is the structure of data teams in the organisation?
Picture an organisation that wants to become more data-driven. It implements an updated strategy by hiring data specialists in business intelligence, data architecture, data engineering & data science. The organisation does not yet have a clear vision on how to structure & manage this new field of specialists. Small data teams pop up within the various business departments & IT. They work in close collaboration with business experts to create an impact & an appetite for data-driven change. As the number of data specialists in the organisation grows, it creates a need for standardisation, quality control, knowledge sharing, monitoring and management.
Sounds familiar? Organisations worldwide are in the process of taking this next step. In this blog, we will discuss how to structure & integrate teams of data specialists into the organisation. We base this discussion on Accenture's classification and AltexSoft's expansion of it.
Two key elements are essential when discussing the structure and management of data teams.
The element of control
Customers and the organisation need work to be delivered predictably with quality in control. In other words, tooling, methods & processes need to be standardised among data specialists. Output can be planned & communicated, delivery of output is recognisable & easy to use, and assurances can be given on the quality of work by adherence to standards. Adding to the control element are the practices of sharing knowledge & code base between specialists and centralised monitoring & management.
The element of relevance
Data specialists rely on domain expertise to deliver output that is relevant to the organisation and its customers. Domain expertise is gained by working closely with business experts within and outside the organisation. Expertise building is slow & specific to the domain. Relevance and speed in delivering go hand in hand. Data specialists create maximum value when working closely & continuously with business experts. Adding to the element of relevance are the practices of customer-centricity, value-driven development and adaptability to the given situation in tooling, methods & processes.
The elements of control & relevance determine the success of the next step in data-driven change. The structure & integration of data teams depends on the amount of control and relevance required by the organisation. We will discuss three common approaches for structuring teams.
Decentralised approach
The decentralised approach maximally leverages the relevance element. Specialists work in cross-functional teams (product, functional or business) within the business departments. This close collaboration allows for flexibility in the process & delivery of output. Communication lines within the cross-functional teams are short, and business knowledge for the data specialist is generally high. Central coordination & control on tooling, methods & processes is minimal, as expertise & people are scattered across the organisation. Organisations implementing this approach may have less need for control or, in many cases, are just starting data-driven work and therefore lack the need for elaborate control measures.
Centralised approach
As the name suggests, this approach centralises data expertise in one or more teams and leverages the control element. Data specialists work closely together, enabling fast knowledge sharing. As the data specialists work in functional teams, management, including resource management, prioritisation & funding, is simplified. Centralisation encourages career growth within functional teams. Standardisation of delivery & results is common, and monitoring on these principles is done more efficiently. Communication lines with business experts & clients are longer, creating the danger of transforming the data teams into a support team. As team members work on a project-by-project basis, business expertise is limited within functional data teams, adding lead time to projects. Organisations implementing this approach may have a high need for control & standardisation. Furthermore, as work is highly prioritised & resource management coordinated, the centralised approach supports organisations with strict budgets in taking steps towards data-driven work.
Center of Excellence (CoE) approach
The CoE is a balanced approach, trying to optimise both the element of control & the element of relevance. In this approach, local pockets of data specialists work within business teams (cross-functional) while a Center of Excellence team enables knowledge sharing, coordination and standardisation. The data-savvy professionals, aligned with the business teams, build business expertise and have efficient communication lines within business departments. The CoE team provides a way for management to coordinate and prioritise (cross-)department tasks and developments by enabling CoE team members to support the local data specialists (commonly called the SWAT technique). Furthermore, the CoE team is tasked with standardising delivery & results across the organisation. Organisations implementing the CoE approach need local expertise within units to support daily operations and a more coordinated team to support & standardise. As data specialists work in both the business department and the Center of Excellence, the organisation needs to support both groups financially. A higher level of maturity in working data-driven is required to justify the SWAT-like approach in high priority data projects.
Concluding, control & relevance are two critical elements to consider when deciding how to integrate & structure data teams within an organisation. We elaborated on three common approaches: centralised, decentralised and the Center of Excellence, each balancing control & relevance differently. Which structure will work for your organisation depends on the current level of maturity and the need for either control or relevance.