In a changing digital world, old business models are being disrupted and the winners will be those who can adapt quickly. We are in the transition from web 2 to web 3. Web 1, 2 or 3, it doesn’t really matter, because it is all data, right? Pay attention, because it is not that simple.
“it doesn’t really matter, because it is all data, right?”
Web 2 is company centric, with a need for data availability, quality and interoperability for monetization. Topics most companies struggle with. Web 3 is user centric, meaning users own and control (their) data, with the potential to share, collaborate on and monetize data.
In the last decade, the focus was on web 2. You can recognize this in the high volume of cookies and apps to capture consumer data, the growth of central data platforms that store, harmonize and share data, and the rise of the data scientist function.
How will web 3 differ? Web 3 is powered by a decentralized network of peers that enables data sharing between users and applications. This allows for an easier, more transparent and more secure exchange of data. Exactly the struggles of web 2.
This data exchange is gaining traction in the mainstream, supported by the accelerating rise of blockchain and, on the horizon, the (meta)verse (see: The Metaverse: new frontier that re-imagines retail & health. And data re-imagines the Metaverse. – D8A directors). Decentral data sharing in web 3 allows users to interact directly without a central intermediary company. It also allows for an even faster growth of data, which will require adjustments in the analysis of data as well as an opportunity to accelerate the development of AI solutions. And, importantly, it gives the user control over where and how to share which data with whom. In theory, the ultimate data privacy.
Web 3 does require rethinking data security; good to keep that top of mind.
Blockchain (or smart contracts) is a good example of how data provenance can be controlled. Provenance covers the source of the data, its initial quality and the data processing activities that have taken place, and it is important for companies and consumers alike. By capturing every step of the way in smart contracts, every step is retraceable. This is relevant for companies, e.g. for fraud detection and prevention, or for building a decentral, large volume of scientific data, one of the key challenges today.
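To make the idea of retraceable steps concrete, here is a minimal sketch in Python (not an actual blockchain or smart contract; all names are illustrative): each processing step is appended to a ledger together with the hash of the previous entry, so the full history can be retraced and tampering becomes detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative provenance ledger: each entry links to the previous one
# via its hash, so any tampering with earlier steps becomes detectable.
ledger = []

def record_step(actor: str, activity: str, data_ref: str) -> dict:
    """Append one processing step to the provenance ledger."""
    prev_hash = ledger[-1]["hash"] if ledger else "genesis"
    entry = {
        "actor": actor,          # who performed the step (hypothetical names)
        "activity": activity,    # which processing activity took place
        "data_ref": data_ref,    # reference to the data involved
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

record_step("sensor-42", "capture", "raw/readings.csv")
record_step("etl-job", "cleanse", "clean/readings.parquet")

# Retrace the full history of the data, step by step.
for step in ledger:
    print(step["activity"], "->", step["hash"][:12])
```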
And for users, blockchain is relevant to understand how their data is used and whether it is in high demand, thereby creating monetization possibilities for users and increasing the incentive for data sharing. Decentralized data democratises data access, enabling companies AND users to create, deploy and use specific instruments that were previously restricted. Here the shift from company-centric to user-centric becomes clear. Data can become your income.
Whether in web 2 or 3, data is the foundational layer of innovation. Web 3 extends that by building meaningful relationships between companies and users (cutting out the middlemen), improving the quality of data, elevating the possibilities and value of digital transactions and increasing data privacy.
As a user, what do you need to have in place? Create a thorough understanding of your rights as owner of your own personal data. Share your data where and when you want, but act wisely. Understand which data can have value where, and monetize accordingly. Have your data available in a personal data vault in accordance with Self Sovereign Identity principles. In short, become a data expert!
There has never been a better time to create impact with data & analytics. More and more data is available, computing power is increasing fast and analytical techniques are maturing. Being data driven is the talk of the town; for sure it is part of the strategy of your organization. In the last decade, most companies have invested in data & analytics initiatives to enhance efficiency, increase sales and comply with regulation. Yet, these initiatives have not yet resulted in full business value. Organisations are getting ready for the next wave: getting value out of data & analytics products.
Best practice model to realize data-driven products and services
1. Climbing fast: the importance of data in value creation
Data is an asset and has (future) economic benefits. During the last years, the volume, complexity and richness of data has grown exponentially, mainly driven by e-commerce and Internet of Things or sensor data, and is expected to continue to do so (see also: McKinsey). In fact, so much of the world is instrumented that it is difficult to actually avoid generating data. We have entered a new era in which the physical world has become increasingly connected to the digital world. Data is generated by everything from cameras and traffic sensors to heart rate monitors, enabling richer insights into human and “thing” behavior.
Add to that the current growth in analytical power (e.g. analytics, machine learning, artificial intelligence). The confluence of data, analytical and computational power today is setting the stage for the next wave of digital disruption and data-driven potential for growth.
“… the confluence of data, analytical and computational power today is setting the stage for the next wave of digital disruption …”
This growth has a number of preconditions. Of course, organisations need to recognize that data is an asset. It also requires that the necessary data is correct, available and (re-)usable. And potential revenue generation needs to be qualified, e.g. through data marketplaces, data-as-a-service integration, digitization of customer interactions, product development, cost reduction, operations optimization and productivity improvement.
In the last ten years data & analytics initiatives within organisations mainly focused on:
→ Controlled data & analytics, e.g. data organisation, data governance, privacy or trusted analytics;
→ Centralized source of available data, e.g. data platform or data engineering;
→ Insights value chain, e.g. deriving insights through use-case based machine learning, process mining or BI self-service by a team of analytics experts.
Not all initiatives bring the desired results. For example, deriving new insights is often considered innovative, but any executive will recognize the sprawl of self-generated BI reports, each claiming their own version of the truth, making it complex and time consuming to turn insights into company steering. And although these initiatives are by and large based on business cases such as efficient reporting, regulatory compliance or end-of-life legacy systems, controlling and centralizing data & analytics in fact supports hygiene factors. And there is the trend that algorithms seem to be commoditizing, e.g. Google and Amazon are providing free access to their analytics libraries. In the end, this trend will transform any insights value chain into a hygiene factor as well.
In any case, the results of most current data & analytics initiatives are not a breakthrough innovation or digital disruption.
2. Approach your data with a product mindset
So, while most data & analytics efforts are still performed to facilitate and improve the insights value chain, the real innovation is productization of data & analytics. Organisations need to look beyond their team of skilled data & analytics professionals with governed data sources and the latest analytics tools and technologies if they want to leverage data to improve and increase revenue. To actively contribute to this, organizations should start viewing data & analytics through a product development lens.
This means that we need to transform from data & analytics (point) solutions mainly focused on internal value towards the creation of full-fledged data & analytics products. Productization involves abstracting and standardizing the underlying principles, algorithms and code of successful point solutions until they can be used to solve an array of similar and relevant business problems. In the end, this should lead to a robust portfolio of data & analytics products.
To enable this, organisations need to have the following foundation in place:
Bridge the gap between data & analytics and business – In many organisations, data & analytics and business execution are totally separate. The business lacks understanding of what is possible and therefore will ask for everything, without prioritization and lacking a requests funnel. This leads to the development of data & analytics point “solutions” without full business potential. Move beyond the current hype of ‘data literacy’ and actually involve relevant (business) stakeholders into data & analytics. Embrace change. And be practical by starting with data quality, ownership or relevant use-cases to improve daily operations through analytics, BI or robotics. Expand from there and be persistent. Truly and sustainably embedding data & analytics in an organisation is a long-term process.
Anchor data & analytics competences at executive level – Business impact from data-derived insights only happens when data & analytics is implemented deep within and consistently throughout the organization. This requires commitment, ownership, sponsorship and direction of a leader with the authority and sufficient understanding of data & analytics and its potential.
Understand potentials – the value of data & analytics depends on uniqueness and end uses. How to monetize the potential of data & analytics? Its value comes down to how unique it is, how it will be used, and by whom. Understanding the value is a tricky proposition, particularly since organizations cannot nail down the value until they are able to clearly specify its uses, either immediate or potential. Data & analytics may yield nothing, or it may yield the key to launching a new product line or making a scientific breakthrough. It might affect only a small percentage of a company’s revenue today, but it can be a key driver of growth in the future. A general rule of thumb is that uniqueness of data increases its value, so find that (hidden) gem. Where possible, join unique sets to further enhance the value and potential of data. Product development takes an investment in time, people and technology. Set up a technical test-and-learn lab environment where pilots and beta-version products can be developed and where the value can be further explored and understood. Include domain experts, data scientists and data experts. Capture client and end-user needs in this lab environment and transform them into solutions and products. Identify quick wins for early adopter clients, to learn and develop how products work in a client environment. Set up cooperation with sales departments and potential partners. Standardise, improve and scale products.
Take sufficient time – be lean where possible – Many organisations have invested resources in data & analytics initiatives, e.g. hiring and/or educating data scientists, data lake implementations and data ownership. They are eager to finally monetize the data so that it indeed is ‘the new oil’. But if products are unclear or without market relevance, there is the risk of missing targets and being overtaken by competitors. At the same time, be opportunistic for quick results and perform pilots as much as possible to create an early adopter client base.
3. The coming wave: data & analytics product opportunities
Potential data driven product opportunities are well researched, identified and described. Think about e.g. IoT-based analytics for leasing companies and car insurers, real-time supply and demand matching for automotive, logistics and smart cities, personalized e-commerce and media, data integration between banks and B2B customers and data driven life sciences discovery. Yet besides resolving the above-mentioned foundation, a detailed approach on how to realise these opportunities is less clearly defined. This paragraph contains the five main steps that all organisations should follow:
Step 1: Conceptualize the product
Identify a data & analytics product that meets the market needs within the lab environment. To identify relevant opportunities, include product experts, business groups (e.g. super users, sales and marketing) and potential clients. The process involves product definition and identification of data required for the product (which should include sourcing data creatively). Organisations with unique internal data have an increased opportunity to create highly valuable products with a good competitive edge. E.g. a bank with an agricultural background can use unique data which is highly sought after by other financial institutes. And a supply chain company can enhance their planning software with integrated robotics to increase efficiency for their clients, reducing churn and enhancing sales opportunities. Take the uniqueness of available data into account early on in the process. Determine the market position and potential business model for the prototype. There are three main prototype categories, i.e. a data-as-a-service product, algorithm code performing robotics and analytics, and BI/software code containing interactive insights based on analytics algorithms.
Step 2: Acquire and build
Data as the foundation for new products is traditionally captured internally and externally to support daily operations and reporting & insights. Given the vast amounts of data available from commercial and public sources, extend the purpose of data acquisition towards productization. Acquired data needs to be correct, timely and understandable, with a clear provenance, including restrictions for usage & storage, and in accordance with regulatory compliance such as privacy laws.
To design and build correct algorithms supporting robotics and/or analytics products, an analytics pipeline needs to be established. In this pipeline, the correctness, reusability, bias, quality and provenance of algorithms and the quality of code are managed. Integrating CI/CD (continuous integration / continuous delivery) supports a lean and agile analytics pipeline with fast testing of the prototype value. Data and code need to be stored in an agile, scalable and secured environment. And finally, data & analytics products gain value from the context of their use, user interface and/or ease of use. So, incorporate UX design into the product development approach.
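As a hedged illustration of such a pipeline gate (a minimal Python sketch assuming a pandas DataFrame as the working format; the column names, thresholds and input file are hypothetical), a CI/CD job could run checks like these before a prototype is promoted:

```python
import pandas as pd

def quality_gate(df: pd.DataFrame) -> list[str]:
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    if df.empty:
        violations.append("dataset is empty")
    # Completeness: at most 5% missing values per column (hypothetical threshold).
    for col in df.columns:
        if df[col].isna().mean() > 0.05:
            violations.append(f"{col}: too many missing values")
    # Uniqueness of the assumed key column.
    if "id" in df.columns and df["id"].duplicated().any():
        violations.append("id: duplicate keys found")
    return violations

if __name__ == "__main__":
    df = pd.read_csv("input.csv")  # hypothetical output of the pipeline step
    problems = quality_gate(df)
    # A non-zero exit makes the CI/CD runner mark the build as failed.
    if problems:
        raise SystemExit("Quality gate failed: " + "; ".join(problems))
```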
Step 3: Refine and validate
Once data is identified and (algorithm and software) code and user interfaces are designed and built, they need to be enriched, refined and validated.
Step 4: Readiness
Store data in an advanced environment where it can be integrated, queried, processed and searched. This makes data available sufficiently fast and reliably for data-as-a-service products. Ensure that this is supported by a solid and robust data architecture. Distribution channels for algorithms and software code can be numerous, e.g. in a cloud environment where they can integrate with web and mobile solutions, as custom-made builds into client environments, or through pre-defined (API) connections made to measure for clients.
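A sketch of what such a pre-defined API connection for a data-as-a-service product might look like (using FastAPI in Python; the endpoint, dataset names and fields are hypothetical, and a real product would query the underlying data platform instead of an in-memory store):

```python
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Hypothetical data-as-a-service API")

# Stand-in for the data platform: a small in-memory store.
DATASETS = {
    "machine-uptime": [
        {"machine_id": "M1", "uptime_pct": 98.2},
        {"machine_id": "M2", "uptime_pct": 91.7},
    ],
}

@app.get("/datasets/{name}")
def get_dataset(name: str, limit: int = 100):
    """Serve (a slice of) a named dataset to a subscribed client."""
    if name not in DATASETS:
        raise HTTPException(status_code=404, detail="unknown dataset")
    return {"name": name, "rows": DATASETS[name][:limit]}

# Run with: uvicorn daas_api:app --reload  (assuming this file is daas_api.py)
```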
Step 5: Market and AI feedback
The competitive nature of the information product space, the availability of new data sources and the demand for timely decision support require an ongoing emphasis on innovation, pricing and monitoring product usage. Adding this step at this stage of the analytics-based data product development process is consistent with the iterative nature of product development in a “lean startup” context. Once again, the evolution of new technologies has provided a mechanism for facilitating a feedback and information extraction process from the marketplace.
Brief recap: companies are eager to utilize the new data-oil. Not every organisation is able to do that successfully. By taking a comprehensive approach, persevering through sufficient knowledge building on ALL organisational levels and starting small based on a step-by-step approach, you can be successful with data products and services.
In the digital world, there are two main flavours: those with extensive data and those that require extensive data.
Find your data entrepreneurship
– In this article, we leave out the data-native (Big Tech) companies. –
Those with extensive data are in fact the (international) corporations with trusted brands, mature system landscapes and long-lasting relationships with customers and partners. They can build upon large quantities of (historical) data, consistently used for existing processes and products. These corporations could do much more with their data, maneuvering (the gambit) real value out of it.
Most corporations already invested in structural advantages for a competitive data edge: a supporting platform infrastructure, data quality monitoring, established data science teams and a data steward / data scientist attitude. For a maximal return on those investments, companies need to go the extra mile.
A strategy for data
The most common pitfall of a data strategy is that it becomes an overview of big words only, with a (too!) high focus on technology and analytics. Yet technology should be an enabler. And analytics is just a manifestation. Don’t gamble with data (products): a good data strategy starts with a clear vision, related to market, technology and (regulatory) developments. Include a target operating model to achieve the strategy. But most of all, focus on the value of data. Determine the use-case types that will create most value. Large corporations have an unparalleled knowledge of industry and markets and are uniquely positioned to oversee this. Of course, there are value cases for efficiency gains and productivity improvements. But limiting yourself to these obvious values tends to close doors on new opportunities. Companies must have a clear ambition pathway to data-driven revenue. This new revenue can include rewiring customer interaction, creating a completely new product or business and stepping into new markets.
In practice, data driven revenues prove to be more difficult than imagined. The effort to introduce new products within new markets, combined with uncertain results, makes companies hesitant. Without a solid and funded ambition and a defined risk appetite, this can result in only minimal innovations, such as adding data features (apps!). Compared to data-native companies, this minimal innovation sometimes seems small potatoes. A clear data strategy gives companies mature guidance for innovation KPIs, investments, risks and market opportunities. The data strategy will help to build success and develop new services, products and even ventures.
Data equals assets
In general, there are two flavours when it comes to data within companies. Companies have less data than they realize. Or companies have more data than they realize and under-utilize it, due to insufficient awareness of its value. Understanding the value of your data is based on 5 pillars:
History
Historical data cannot be easily replicated; years of data about customers, production, operations, financial performance, sales, maintenance and IP are enormously valuable. Such historical data is beneficial for increasing operational efficiency, building new data products and growing customer intimacy. Although Big Tech companies have been around for some years already, they cannot compete with dedicated historical data sets. If the (meta)data is of good quality, the value increases even more. Mapping where this data resides gives an up-to-date overview of relevant data throughout the system landscape.
Privacy
Corporations are highly aware of the relevance of privacy regulations and have adopted data privacy measures and controls into their data operations. This way, the data that is available is for sure in accordance with (global) data privacy legislation.
Integration
Being part of a – traditional – chain with external suppliers and receivers (e.g. supplying materials to a manufacturer who sells them to a retailer) can leverage the data into multiple views on e.g. sourcing and warehouse management. Established corporations are uniquely situated to build data chains. Having a trusted brand creates traction for cooperation and partnerships to capture, integrate, store, refine and offer data & insights to existing and new markets.
“Understanding the value of data requires real entrepreneurship”
Extension
Large corporations can enhance existing & new products with data, e.g. through sensor data. Big Tech companies are doing that now, mostly for software products. More traditional companies are particularly capable of doing this for hardware products. This way of thinking is still very much underdeveloped, because it is difficult to introduce a new product or, even worse, enter a new market with a new product. Yet, it is also the ultimate opportunity! Build data entrepreneurship by starting small while understanding the full potential of data. Examples of small starts are identifying whether a data model can be IP, e.g. when it is part of a larger hardware product. In real life, starting small often means focusing on a solution that is close to home, e.g. joining multiple data sets into one and/or building a dashboard, which can be offered to customers as an extended service. These are often chosen for feasibility reasons. From a data product perspective, don’t consider such an approach a small start; consider it not even starting. Companies that do not progress beyond these products should at least have a simultaneous experimental track, building and failing new products and services to learn what works and what doesn’t. Understanding the value of data requires entrepreneurship (see also the example of Rolls Royce here).
Data entrepreneurship
Large and established corporations are the epitome of entrepreneurship. It is at their very core. Yet, often not for data. Data can be so alien to them that experimenting for value is hesitant or not happening at all. And this is exactly where start-up companies are not lacking. They might not have the large historical data sets, trusted data chains or easy connections with available hardware products. But they do have the entrepreneurial spirit, are highly aware of the value of data and have the capability to experiment and become successful with new products.
Becoming data entrepreneurial means knowing which data you have, understanding the (potential) value and daring to look beyond the obvious.
Geneviève Meerburg (Director SME Services at van Spaendonck) on: implementing a data strategy within her organisation. Geneviève shares how the importance and value of data organically grew, leading to a concrete need for a data strategy. Listen to hear how van Spaendonck approached truly living through the principles set out in the data strategy and how it helped create new services for their clients.
Date with D8A
Data strategy for small and medium-sized enterprises
All good data starts with business metadata. This business metadata is the information we need to build a dataset. There is someone in the business who approved the collection and processing of data in the first place. He/she also provides requirements and descriptions of what is needed. The challenge is that this information is often not managed well enough over time, which causes business metadata quality to decrease. And thereby degrades the data itself. And that affects your AI solutions, BI reporting and integration with platforms.
Become aware of the necessity and value of business metadata to enable support on data requests and make data findable and understandable!
When we know what business stakeholders want, we can design and implement this in physical form through technical metadata. We can then build the solution or buy it off the shelf and map it to the business metadata.
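To make this mapping concrete, here is a minimal sketch of how business metadata might be captured and linked to technical metadata (in Python; the field names are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class BusinessMetadata:
    """What the business approved and described."""
    term: str        # business term, e.g. "customer churn date"
    definition: str  # agreed business definition
    owner: str       # who approved collection & processing
    purpose: str     # purpose limitation for the data

@dataclass
class TechnicalMetadata:
    """How the requirement is implemented physically."""
    table: str
    column: str
    data_type: str
    business_ref: BusinessMetadata  # mapping back to the business metadata

churn = BusinessMetadata(
    term="customer churn date",
    definition="Date on which a customer ended all active contracts",
    owner="Head of Customer Operations",
    purpose="Retention analytics",
)
column = TechnicalMetadata("crm.customers", "churn_date", "DATE", churn)
print(column.business_ref.owner)  # trace any column back to its approver
```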
Operational metadata
Now that we know what data we need and what it means, and have a place to store and process data, we can start doing business. Doing business generates operational metadata. Operational metadata is very valuable in monitoring our data processes. We get insights into what data is processed, how often, and at what speed and frequency. This is great input for analysing the performance of our IT landscape and seeing where improvements can be made. Furthermore, we monitor the access to systems and data. When we take it a step further, we can even start analysing patterns and possibly spot odd behaviour as signals of threats to our data.
Step into the driving seat by capturing and analysing your operational metadata, and become proactive in controlling your IT landscape!
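A minimal sketch of how operational metadata could be used to spot odd behaviour (the log of runtimes and the three-sigma threshold are assumptions for illustration):

```python
from statistics import mean, stdev

# Hypothetical operational metadata: runtimes (seconds) of a daily data process.
runtimes = [42, 45, 41, 44, 43, 40, 46, 44, 120]  # last run is suspicious

baseline = runtimes[:-1]
mu, sigma = mean(baseline), stdev(baseline)
latest = runtimes[-1]

# Flag runs more than three standard deviations from the baseline mean.
if abs(latest - mu) > 3 * sigma:
    print(f"Alert: last run took {latest}s (baseline {mu:.1f}s ± {sigma:.1f}s)")
```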
Social Media metadata
Finally, we take social metadata as an inspiration. This is where the value of your data becomes even more tangible. Value is determined by the benefit the user experiences. The way the user uses the data is then an indicator of value. Thus, if we start measuring which data is used often by many users, this data must be important and valuable. Invest in improving the quality of that data to improve the value created. Behaviour is also a good indicator to measure: how much time is spent on content, and which content is skipped quickly? Apparently the skipped content doesn’t match up with what the user is looking for.
Measure social metadata to analyse what data is used often by many. It is likely to be more valuable than other data.
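A small sketch of that measurement (the event log is hypothetical): count distinct users per dataset and rank datasets accordingly.

```python
from collections import defaultdict

# Hypothetical social metadata: (dataset, user) access events.
events = [
    ("sales_orders", "alice"), ("sales_orders", "bob"),
    ("sales_orders", "carol"), ("ref_countries", "alice"),
]

users_per_dataset = defaultdict(set)
for dataset, user in events:
    users_per_dataset[dataset].add(user)

# Datasets used by many distinct users are likely the more valuable ones.
for dataset, users in sorted(users_per_dataset.items(),
                             key=lambda kv: -len(kv[1])):
    print(dataset, len(users), "distinct users")
```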
Business metadata
Governance metadata – All metadata required to correctly control the data, like retention, purpose, classifications and responsibilities:
– Data ownership & responsibilities
– Data retention
– Data sensitivity classifications
– Purpose limitations

Descriptive metadata – All metadata that helps to understand, use and find the data:
– Business terms, data descriptions, definitions and business tags
– Data quality and descriptions of (incidental) events to the data
– Business data models & business lineage

Administrative metadata – All metadata that allows for tracking authorisations on data:
– Metadata versioning & creation
– Access requests, approval & permissions
Technical metadata
Structural metadata – All metadata that relates to the structure of the data itself, required to properly process it:
– Data types
– Schemas
– Data models
– Design lineage

Preservation metadata – All metadata that is required for assurance of the storage & integrity of the data:
– Data storage characteristics
– Technical environment

Connectivity metadata – All metadata that is necessary for exchanging data, like APIs and topics:
– Configurations & system names
– Data scheduling
Operational metadata
Execution metadata – All metadata generated and captured in the execution of data processes:
– Data process statistics (record counts, start & end times, error logs, functions applied)
– Runtime lineage & ETL/actions on data

Monitoring metadata – All metadata that keeps track of data processing performance & reliability:
– Data processing runtime, performance & exceptions
– Storage usage

Controlling (logging) metadata – All metadata required for security monitoring & proof of operational compliance:
– Data access & frequency, audit logs
– Irregular access patterns
Social metadata
User metadata – All metadata generated by users of the data:
– User provided content
– User tags (groups)
– Ratings & reviews

Behavior metadata – All metadata that can be derived from observing usage:
– Time content viewed
– Number of users/views/likes/shares
Bart Rentenaar (Enterprise Data Lead at Athora) on: implementing data innovation within his organisation. Bart shares examples of use cases that inspired him to get started with data innovations, the framework that he employs to structure initiatives, examples of data innovations he implemented and the team that made them possible. Listen for tips when starting out with the implementation of data innovations.
What is data monetization? According to McKinsey, it is the process of using data to increase revenue, which the highest performing and fastest growing companies have adopted and made an important part of their strategy. Internal or indirect methods include using data to make measurable business performance improvements and to inform decision making. External or direct methods include data sharing to gain beneficial terms or conditions from business partners, information bartering, selling data outright, or offering information products and services (Definition of Data Monetization — IT Glossary | Gartner).
How to deploy a data monetization strategy
Companies that innovate through data monetization recognize this: monetization can be tricky. Get it right, and you have happy customers and users who are willing to pay for your product. But mis-prioritize, and your audience numbers quickly drop, along with your revenue. Building data monetization on the principles of ‘trusted data’ mitigates the risk of mis-prioritisation.
There is no clear-cut answer how a data-driven product generates revenue, or when that is appropriate. And there will, of course, be some products that never monetize.
Having a strategy will deliver guidance. A data monetization strategy is, simply put, a plan to generate revenue from data-driven products. Like any plan, it guides and brings structure. It is not something that is fixed; it should be flexible enough to develop with the product, the market the product exists in and its users. Goals can and will change over time, and so strategies need to evolve to continuously achieve the goals they’re designed to target. Data products can be based on loyalty & subscription models or a one-time purchase model. It is important to understand at the beginning of the strategy which model(s) the data can leverage, to create focus and scalable results.
Data monetization strategy must be built upon the following pillars:
* Understanding how data can be converted into value (see below) and the associated opportunities and challenges of data-based value creation;
* Strategic insights into improving and preparing data to support monetization;
* Strategic insights in the potential value, markets and ecosystems.
Opportunities for monetization
Data driven business models help to understand how data can uncover new opportunities. This can be focused on value for efficiency (reducing costs and risks), value for legislation (complying with relevant regulations) and value by maximizing profits, by increasing impact on customers, partnerships and markets. This can include embedding data models, metadata and analytics into products and services. Data monetization needs to be scalable, flexible and user friendly, thereby providing advantages for the company and its customers.
Indirect monetization includes data-driven optimization. This involves analyzing your data to gather insights into opportunities to improve business performance. Analytics can identify how to communicate best with customers and understand customer behavior to drive sales. Analytics can also highlight where and how to save costs, avoid risk and streamline operations and supply chains.
“Having a full understanding of monetization possibilities will help to keep an open mind.”
Direct monetization of data is where things get interesting. It involves selling direct access to data, e.g. to third parties or consumers. This can be in raw form (heavily anonymised due to privacy regulations), aggregated, metadata only or transformed into analysis and insights.
Data-as-a-service
This is the most direct data monetization method. Data is sold directly to customers (or shared with partners for a mutual benefit). The data is e.g. aggregated and/or anonymised, to be fully in accordance with legislation. And to enable trusted data. Buyers mine the data for insights, including combining it with their own data, or use it for AI solutions within software. Ecosystem play is the newest area for Data-as-a-service.
Insights-as-a-service
This applies analytics to (combined) internal and external data. It focuses on the insights created using data, rather than the data itself. Either the insights are sold directly or provided as e.g. analytics enabled apps.
Ecosystems-as-a-service
This is a more flexible type of data monetization. The data ecosystem provides highly versatile, scalable and shareable data and/or analytics, when needed in real-time. Standardized exchanges and federated data management enable using data from any source and any format.
Analytics-as-a-service
This is the most advanced and exciting way of monetizing data. Analytics-as-a-service seamlessly integrates features such as dashboards, reports and visualization to new and existing applications. This opens up new revenue streams and provides powerful competitive advantage.
Having a full understanding of monetization possibilities will help to keep an open mind. Where many companies are focusing on analytics products & services, there are more opportunities! Always stay within legal & ethical boundaries, but explore all opportunity formats to grasp new markets.
Marketeers and dedicated advertising benefit from good data quality
Google announced its intention to kill off tracking cookies (so-called 3rd party cookies) within its Chrome browser: cookies which advertisers use to track users around the web and target them with dedicated ads. Google is not the only major player altering the digital ad landscape. Apple has already made changes to restrict 3rd party cookies, along with changes to mobile identifiers and email permissions. Big Tech altering 3rd party cookies is driven by the need to respect the growing data privacy consciousness. Most consumers don’t like the feeling of being tracked across the internet (70% of U.S. adults want data regulation reform and 63% of internet users indicate that companies should delete their online data completely).
For most marketeers, this paradigm change presents huge challenges for customer acquisition through tracking users and targeting them with dedicated digital advertising.
On the other hand, 3rd party cookies are inherently problematic, from limited targeting capabilities and inaccurate attribution to the personalization & privacy paradox. Their loss presents an opportunity to provide a smaller group of high-value customers with higher-caliber and increasingly personalized experiences. In other words, losing these cookies might become a blessing in disguise.
Confronting data acquisition challenges in a cookie-less future
For all the shortcomings of 3rd party cookies, the marketing industry does not yet have a perfect answer for how to acquire customers without them. Marketeers are waking up to the impactful change they are facing. One potential answer to the loss of 3rd party cookies is that they will be replaced with 1st & 2nd* party data, i.e. data shared directly by the customer, such as an email address, phone numbers and customer authenticators (see below). This data can become the mutual currency for the advertising business. First party data can be hard to obtain; you need to “earn” it, which calls for solutions on how to gain good quality 1st party data.
Technology solutions
Some solutions focus on technology, e.g. Google’s Federated Learning of Cohorts (FLoC), a type of web tracking that groups people into “cohorts” based on their browsing history for interest-based advertising. Other technology solutions include building a 1st and 2nd party data* pool, i.e. a Customer Data Platform (CDP). CDPs are built as a complete data solution across multiple sources. By integrating all customer data into individual profiles, a full view of customers can be profiled. Another solution is private identity graphs, which hold all the identifiers that correlate with individuals. Private identity graphs can unify digital and offline first-party data to inform the single customer view and manage the changes that occur over time. This helps companies to generate a consistent, persistent 360-degree view of individuals and their relationship with the company, e.g. per product brand. All to enable stronger relationships with new and existing customers.
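As a hedged sketch of the core idea behind such an identity graph (a union-find structure over identifier pairs; all identifiers below are made up), identifiers that are observed together on one touchpoint are merged into a single customer view:

```python
# Minimal identity-resolution sketch: identifiers observed together
# (e.g. email + loyalty ID on one order) are merged into one profile.
parent: dict[str, str] = {}

def find(x: str) -> str:
    """Return the canonical identifier for x's profile."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def link(a: str, b: str) -> None:
    """Record that identifiers a and b belong to the same person."""
    parent[find(a)] = find(b)

# Hypothetical observations from web, app and offline touchpoints.
link("email:jane@example.com", "cookie:1st-party-abc")
link("cookie:1st-party-abc", "loyalty:998877")
link("email:john@example.com", "phone:+31612345678")

# All three of Jane's identifiers now resolve to one single customer view.
assert find("email:jane@example.com") == find("loyalty:998877")
```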
Earning good quality data will increase the need for standardized and good quality customer journeys. And therefore, the need for standardized and good quality data. Where previously design and data quality were not closely connected, the vanishing 3rd party cookie now acts as a catalyst to integrate both.
Data quality is usually an unknown phenomenon for most designers**, design companies, front- & back-end software developers and marketeers. It requires a combined understanding of multiple domains, i.e., the user interface where data will be captured, the underlying processes which the captured data will facilitate, data storage & database structures and marketing (analyses) purposes.
Finding an expert who has all this combined knowledge is like finding a real gem. If you do, handle with care! More likely, each domain will need at least an understanding of how it enhances and impacts the other domains.
For the (UI/UX) designer:
Have a good knowledge of data quality rule types. What is the difference between a format and an accuracy type? Is timeliness of data relevant? What are pitfalls for data quality rules? How to integrate multiple purposes (e.g. processes, data integration & analytics) into a dedicated data quality rule?
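A minimal illustration of the difference (in Python; the patterns, reference list and thresholds are examples, not standards): a format rule checks the shape of a value, an accuracy rule checks it against a trusted reference, and a timeliness rule checks its age.

```python
import re
from datetime import date

def format_rule(postcode: str) -> bool:
    """Format: does the value match the expected pattern (Dutch postcode)?"""
    return re.fullmatch(r"\d{4}\s?[A-Z]{2}", postcode) is not None

def accuracy_rule(country: str, reference: set[str]) -> bool:
    """Accuracy: does the value exist in an authoritative reference list?"""
    return country in reference

def timeliness_rule(last_updated: date, max_age_days: int = 365) -> bool:
    """Timeliness: is the value fresh enough for its purpose?"""
    return (date.today() - last_updated).days <= max_age_days

ISO_COUNTRIES = {"NL", "BE", "DE"}  # truncated reference set for illustration
print(format_rule("1234 AB"))             # True: valid format
print(accuracy_rule("XX", ISO_COUNTRIES)) # False: not in the reference list
print(timeliness_rule(date(2020, 1, 1)))  # False: stale data
```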
For product owners:
Ensure that expertise on data entry and on how data is used within processes is available at a granular level (i.e., at data field level). Onboard a so-called data steward who can facilitate the correct input for data quality. Let the data steward cooperate with front-end developers and designers.
Keep your data fresh. Data doesn’t last forever. Make sure data stewards support data updates and cleansing.
Data stewards should work with designers and front-end developers to determine which fields are considered as critical. These fields should be governed by a strict regime, e.g., for the quality and timeliness of data as well as for access to the data and usage purposes.
Personal authentication is a separate topic that needs to be addressed as such. Relying on big tech firms such as Facebook or Google can seem an easy solution, but it increases the risk of being dependent on an external party. Authentication needs to be earned to build authentic customer relationships. When customers give a company a verifiable, durable piece of identity data, they are considered authenticated (e.g. signing up for a newsletter or new account via email address). This will be a new way of working for most companies. Therefore, data stewards need to up their game and not only know existing processes but extend their view, understanding and knowledge towards new developments.
Data stewards must align with the Data Privacy Officer on how to capture, store and process data. When it comes to privacy, compliance and ethics, you can never play it too safe.
For data storage & databases:
Ensure that data architecture (or at least a business analyst) is involved in the design process. This is sometimes resolved by the back-end developer (who cannot work without aligning with the architect office on data integration, models for databases and data definitions).
If standardized data models and/or data definitions reside within the organization, this should be part of the database development. Refer to authoritative source systems where possible.
If the application is made via low-code, standardization of existing data models/architecture, data definitions and data quality rules is often part of the approach. Yet, data quality checks should always take place as a separate activity.
For marketeers:
Understand how customer journeys can facilitate 1st and 2nd party cookies. Determine which data is needed for insights. Gather insights requirements and work together with the data steward to define data quality rules that facilitate your insights. Now that the 3rd party source is limited, the value of the customer journey for marketing increases!
Privacy is one of the catalysts making 3rd party cookies disappear. This requires a new approach to acquiring personal data for marketing and ad targeting: new developments that require new skills and, more importantly, new cooperation between existing domains. Companies that enable this will lead this new way of working.
Footnotes:
* Data from 1st party cookies occurs only within a company’s own domain; data from 2nd party cookies can be used within and outside a company’s own domain. This article takes mostly 1st party data into account. For 2nd party data, you can further investigate e.g. ‘data co-ops’: complementary companies that share data. Each member of the co-op should relate to the others in a meaningful way, because outside of your own web domain you’ll be able to reach customers only on your partner sites, and this reflects on your brand.
** Of course, there are designers who work with data enabled design. In the view of this article, this is a different topic, more focused on tracking & logging data, which is then analyzed to improve the design. This article is about good data quality when data is entered via a UI, e.g. as part of a customer journey.
How important is the structure of data teams in the organisation?
Picture an organisation that wants to become more data-driven. It implements an updated strategy by hiring data specialists in business intelligence, data architecture, data engineering & data science. The organisation does not yet have a clear vision on how to structure & manage this new field of specialists. Small data teams pop up within the various business departments & IT. They work in close collaboration with business experts to create an impact & an appetite for data-driven change. As the number of data specialists in the organisation grows, it creates a need for standardisation, quality control, knowledge sharing, monitoring and management.
Sounds familiar? Organisations worldwide are in the process of taking this next step. In this blog, we will discuss how to structure & integrate teams of data specialists into the organisation. We will base this discussion on Accenture’s classification and AltexSoft’s expansion of it.
Two key elements are essential when discussing the structure and management of data teams.
The element of control
Customers and the organisation need work to be delivered predictably with quality in control. In other words, tooling, methods & processes need to be standardised among data specialists. Output can be planned & communicated, delivery of output is recognisable & easy to use, and assurances can be given on the quality of work by adherence to standards. Adding to the control element are the practices of sharing knowledge & code base between specialists and centralised monitoring & management.
The element of relevance
Data specialists rely on domain expertise to deliver output that is relevant to the organisation and its customers. Domain expertise is gained by working closely with business experts within and outside the organisation. Expertise building is slow & specific to the domain. Relevance and speed in delivering go hand in hand. Data specialists create maximum value when working closely & continuously with business experts. Adding to the element of relevance are the practices of customer-centricity, value-driven development and adaptability to the given situation in tooling, methods & processes.
The elements of control & relevance determine the success of the next step in data-driven change. The structure & integration of data teams depends on the amount of control and relevance required by the organisation. We will discuss three common approaches for structuring teams.
Decentralised approach
This approach maximally leverages the relevance element. In the decentralised approach, specialists work in cross-functional teams (product, functional or business) within the business departments. This close collaboration allows for flexibility in the process & delivery of output. Communication lines within the cross-functional teams are short, and business knowledge of the data specialist is generally high. Central coordination & control on tooling, methods & processes is minimal, as expertise & people are scattered across the organisation. Organisations implementing this approach may have less need for control or, in many cases, are just starting data-driven work and therefore lack the need for elaborate control measures.
Centralised approach
As the name suggests, this approach centralises data expertise in one or more teams. This approach leverages the control element. Data specialists work closely together, enabling fast knowledge sharing. As the data specialists work in functional teams, management (including resource management, prioritisation & funding) is simplified. Centralisation encourages career growth within functional teams. Standardisation of delivery & results is common, and monitoring on these principles is done more efficiently. Communication lines with business experts & clients are longer, creating the danger of transforming the data teams into a support team. As team members work on a project-by-project basis, business expertise is decentralised within functional data teams, adding lead time to projects. Organisations implementing this approach may have a high need for control & standardisation. Furthermore, as work is highly prioritised & resource management coordinated, the centralised approach supports organisations with strict budgets in taking steps towards data-driven work.
Center of Excellence (CoE) approach
The CoE is a balanced approach, trying to optimise both the element of control & relevance. In this approach, local pockets of data specialists work within business teams (cross-functional) while a Center of Excellence team enables knowledge sharing, coordination and standardisation. The data-savvy professionals, aligned with the business teams, build business expertise and have efficient communication lines within business departments. The CoE team provides a way for management to coordinate and prioritise (cross-)department tasks and developments by enabling CoE team members to support the local data specialists (commonly called the SWAT technique). Furthermore, the CoE team is tasked with standardising delivery & results across the organisation. Organisations implementing the CoE approach need local expertise within units to support daily operations and a more coordinated team to support & standardise. As data specialists work in both the business departments and the Center of Excellence, the organisation needs to support both groups financially. A higher level of maturity in working data-driven is required to justify the SWAT-like approach in high-priority data projects.
Concluding: control & relevance are two critical elements to consider when deciding how to integrate & structure data teams within an organisation. We elaborated on three common approaches (centralised, decentralised and the Center of Excellence), each balancing control & relevance differently. Which structure will work for your organisation depends on the current level of maturity and the need for either control or relevance.
Transform data as a by-product into data-fueled value streams
Until recently, data was in most cases perceived as a by-product, with the potential to deliver (new) insights. This is the case for e.g. IoT sensor data, which is mainly used for process automation & preventive maintenance within manufacturing or healthcare: a secondary stream of data usage. Companies with an open mind for opportunities are now realizing that by embedding these IoT sensors across product lines, manufacturing facilities or patient journeys, the generated data, and the services it can deliver, can become a strategic asset.
Any company should ask the following: Does the company generate lots of data? Or is it a data rich company that produces and/or maintains market leading products?
Answering this question requires a real data-driven mindset. There is no doubt that preventive maintenance delivers value. Yet, it is not scalable enough to make a company data rich. And if the investment in (IoT) data can be returned in multiples, companies should be creative and look for opportunities beyond just predicting equipment issues and maintenance requirements.
Don’t just act data-driven, be it! Look beyond the initial goal of the IoT sensors and the data that is generated. Take the research & business chair and drive new products and services. Data initially generated by IoT* for automation can provide new insights into e.g. the live performance of products, providing customers with valuable services. A good example is the Rolls Royce IntelligentEngine. The IntelligentEngine IoT sensor data, aggregated and analyzed in the cloud, is providing Rolls-Royce with unprecedented insights into the live performance of its machinery. A modern passenger jet generates an average of 500GB of data per flight(!) and several terabytes on long-haul routes. The thousands of sensors in each Rolls-Royce engine track everything from fuel flow, pressure and temperature to the aircraft’s altitude, speed, and air temperature, with data instantly fed back to Rolls-Royce operational centers. The company’s aircraft availability center continuously monitors data from 4,500 in-service engines. Rolls-Royce can tap into a cloud-based ecosystem of small, specialist 3rd parties to analyze different parts of the available data. And that data capability is rapidly evolving into providing customers with valuable aftermarket services that range from showing airlines how to optimize their routes to keeping a survey ship in position in heavy seas.
Next to this, the company recently launched another data-centric initiative: R2 Data Labs. This acts as an acceleration hub for data innovation. “Using advanced data analytics, industrial artificial intelligence and machine-learning techniques, R2 Data Labs will develop data applications that unlock design, manufacturing and operational efficiencies within Rolls-Royce, and creates new service propositions for customers.”
Combining data & analytics into potential new value streams is often a first step within companies that are aiming to become more data driven. In practice, this can bog down into analytics MVPs without generating a scalable product or service that is understood, carried and supported by business stakeholders, account management and sales. This connection needs to be in place to be successful. The R2 Data Labs example shows what additional circumstances and behavior are needed for success. At its heart, Data Innovation Cells comprise experts drawn from multiple disciplines across the company and apply cutting-edge DevOps principles to rapidly explore data, test new ideas, and turn those into new innovations and services. In other words, successful data products & services comprise a fusion of the following elements: a technological (ecosystem) environment, sufficient internal & external data, understanding the value of data, trusted data & analytics, an understanding of existing & new markets, and product development.
Rolls Royce had the advantage of having available data, sufficient circumstances as well as a very good market understanding. They were part of a mature market and ‘just had to tap into’ data to deliver relevant new products.
There are multiple examples of data driven value streams within existing and upcoming companies. Check out e.g. Google spin-off The Climate Corporation / ClimateEngine, which enables crop insurance based on satellite data, or Twiga Foods, which introduces smart crates with tags for real-time data collection, thereby enabling food distribution in Kenya. Examples where data is driving new value models: First Access and Tala use data from mobile phones to provide alternative credit scoring services that help financial services providers assess the risk of people at the base of the pyramid. BBOXX has developed the Bboxx Pulse® platform, which harnesses remote (real-time payment) monitoring data and IoT data to deliver energy access in a scalable and distributed model.
Value models
The above-mentioned examples show how data can be monetized via different value models. By comparing monetization models and determining which best suits their data value offerings, companies can increase revenue margins and introduce beneficial new products and features, while expanding customer relationships and delivering specific value that keeps clients coming back for more.
1. Perpetual model: the traditional model where customers pay for a product once, upfront, and then have a perpetual right to use the product (e.g. the raw data, aggregated data, metadata or insights). The seller has full responsibility for upkeep and updates.
2. Subscription (‘as a service’) model: here, Data-as-a-Service or DaaS. The customer buys a service subscription covering access to the data, the right to use the data, and updates & support. The advantage is that DaaS provides a predictable revenue stream that can be projected into the future. This revenue predictability has made the subscription model, similar to the software industry, increasingly popular.
3. Usage model: in this leading-edge monetization model, customers pay providers based on specified usage metrics, with periodic invoicing. Examples are pay per tick when data is needed in real time, the number of tests performed per data set, or (a data-related, though technically software, example) cloud providers charging customers based on terabytes of data storage. A customer pays for what they use.
4. Outcome model: a quite leading-edge and interesting model. Suppliers are not selling data products or services; they’re selling an outcome. Like those law firm commercials that say, “We don’t get paid unless you see cash,” this model is about achieving a defined business result rather than delivering individual IoT data. This model is used by Rolls Royce and The Climate Corporation (now part of Monsanto).
5. Impact model: companies, e.g. some of the largest agricultural companies in the world, are collaborating to help eradicate malaria by 2040. This is funded by donors. Some business models can create value by measuring social impact and reporting it to relevant bodies, such as government departments, donors and impact investors. This enables companies to think beyond data business models. Additional value streams could be generated by payments as grants or in ‘pay by results’ schemes.
*Note: this article has examples of IoT sensor data, which mostly does not include privacy-related data. However, all data monetization efforts must always be performed within the requirements of the data privacy regulations in force, as well as companies’ ethical guidelines.