Token Economics 101

Background

Much has been written about how modern internet-based digital platforms such as Google, Facebook, and eBay, while creating market value and consumer choice, are prone to rent seeking due to monopolistic winner-take-all effects (There is No Single Solution To Making Internet More Decentralized). Beyond causing economic distortions, platforms are prime targets for cyber attackers, as the Equifax and Sony security breaches have demonstrated. Further, although such platforms have created consumer surplus, they have also created consumer deficits of their own by taking away privacy and control over one's data. This view of the platforms is part of a larger story of the internet's balkanization: independent domains emerging because platforms do not interoperate, proprietary networks resulting from regulations that target net neutrality, and geographic islands created by governments restricting the free flow of information. Platform companies, which have built their businesses atop the internet, get blamed the most for this failure. The real failure, however, is the market's failure to create effective structures to govern and incentivize appropriate development of the internet and the applications sitting on top of it. This malaise is not limited to internet-based markets, but extends to all of today's intermediated, centralized market systems. Rather serendipitously, the world may have discovered an alternative way to better manage the creation and development of efficient market systems: cryptocurrency or token economies. By aligning the incentives and needs of suppliers, consumers, and service providers, token economies create value for all participants in an efficient manner. Indeed, it is not just internet-based solutions: any market where goods and services are exchanged can benefit from the efficiencies token economies bring.

“Token Economy” As In Applied Psychology?

Token markets are markets whose workings are based on a cryptocurrency or token, such as Bitcoin. On one hand, such markets are guided by an explicitly defined crypto-economic system or token economy; on the other, they evolve dynamically based upon participants' interactions. Originating in applied psychology, the term "token economy" refers to a system of incentives that reinforces and builds desirable behaviors; essentially, the token economy implements the theory of incentives used to explain the origins of motivation. Token economies are a useful tool in behavioral economics, where they have been applied both in the lab and in the real world to study human decision making within economic contexts. In crypto markets, a token economy is implemented using digital assets as tokens and a framework of rules as the market protocol, implemented in code using cryptography. Token economics in this sense refers to the study, design, and implementation of economic systems based on cryptocurrencies. Supporting the token economies of cryptocurrency markets is distributed ledger technology (DLT), such as Blockchain. Tokenization is the process of converting some asset into a token unit that is recorded on the DLT: anything of economic value, such as real estate, commodities, or currency, can be tokenized, giving rise to a variety of different token economies.

What is Different?

By enabling costless verification and reducing the cost of networking, token economies streamline the operational functioning of markets and open them up for broad innovation (Some Simple Economics of the Blockchain). Markets are formed when buyers and sellers come together to exchange goods and services. Effective functioning of markets depends upon effective verification of market transactions between sellers and buyers, a function that market intermediaries take on as market complexity increases. The intermediary structure, however, brings its own set of issues: intermediaries can misuse information disclosed to them by buyers and sellers, conflicts of interest, agency problems, and moral hazard may still persist, they can abuse their market power (as has happened with some tech giants recently), and their presence increases the overall cost structure of the market. Further, hurdles to developing trust between parties limit the extent to which business relationships develop, thus limiting innovation: it is costly and time-consuming to develop and enforce contracts to guide the exchange of property rights. Through the use of DLT, token economies provide costless verification (and thus mitigate the need for intermediaries) and reduce the cost of networking (and thus provide a way to efficiently exchange property rights). But what is fundamentally different about token economies is that they create a framework in which participants mutualize their interests and have strong incentives to continually improve the functioning of the economy.

Flywheels, Loops, and Knock On Effects

A key part of token economy design is "mechanism design": designing the system of incentives that encourages fruitful development of the token economy's infrastructure as well as the overlying applications and services. Mechanism design addresses how tokens are used to pay participants for maintaining the DLT network, for service usage, for profit sharing, for governance, and so on. An optimally designed token economy creates the appropriate mix of incentives to ensure a high-performing market infrastructure, valuable services, and smooth running of the platform. As overlay protocols, applications, and APIs are built on top of the base DLT (The Blockchain Application Stack), mechanism design ensures value is distributed equitably not just within a layer but across layers. An appropriately designed token system unleashes powerful feedback loops that perpetuate desirable behaviors and actions in the marketplace – indeed, mechanism design can either make the token economy the El Dorado of all crypto economies, or be the death knell of the token even before it has had a fighting chance. Depending upon the specific market it is trying to make, each token economy will have a unique token design and its own analysis of how feedback loops are expected to take hold, but there are some common fundamentals to how incentives work in a token economy.

A new token system often comes to market through an ICO (initial coin offering), which allows the founding team to distribute tokens to raise capital for funding the enterprise. Much of the initial pull for the token depends upon the team's vision and the soundness of the token economy. As the market's perceived value of the token increases, participants, customers, and speculators take note and invest in the token, increasing its demand. Since the token supply typically has a ceiling, increasing demand leads to increasing value (assuming token holders hold on to the token at least for some time period), which attracts even more participants, customers, and speculators. An increasing number of participants strengthens the network in two ways: it makes the network more decentralized and secure, and participants work to improve the network and develop services on the platform. This increases the utility of the system, which attracts more customers and further strengthens the feedback loop.
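
A toy simulation can make this flywheel concrete. The sketch below is illustrative only: the names and values of TOTAL_SUPPLY, ADOPTION_RATE, and PRICE_SENSITIVITY are arbitrary assumptions, not drawn from any real token economy. It simply shows how, with a capped supply, growing demand feeds token value, which in turn attracts more participants.

```python
# Illustrative toy model of the token-economy flywheel described above.
# All parameters are arbitrary assumptions for demonstration purposes.

TOTAL_SUPPLY = 1_000_000   # fixed token supply (the "ceiling")
ADOPTION_RATE = 0.08       # how strongly rising value attracts new participants
PRICE_SENSITIVITY = 0.5    # how strongly excess demand translates into price

def simulate(periods: int = 10, participants: float = 1_000, price: float = 1.0):
    """Naive positive-feedback loop: more participants -> more demand -> higher price -> more participants."""
    history = []
    for t in range(periods):
        demand = participants * 10                       # assume each participant wants ~10 tokens
        scarcity = demand / TOTAL_SUPPLY                 # demand relative to the fixed supply
        price *= (1 + PRICE_SENSITIVITY * scarcity)      # capped supply: excess demand pushes price up
        participants *= (1 + ADOPTION_RATE * scarcity)   # rising value attracts more participants
        history.append((t, round(participants), round(price, 2)))
    return history

if __name__ == "__main__":
    for period, n, p in simulate():
        print(f"period={period:2d} participants={n:7d} price={p:8.2f}")
```

Flipping the sign of the adoption and price effects in the same loop produces the mirror-image death spiral discussed next.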

These same mechanisms can work in reverse, sending the token economy into a death spiral if participants perceive a loss of value.

A token's perceived utility can take a hit for a variety of reasons: the token may not create enough utility in the market, or a security weakness may open it up to a cyberattack. A perceived loss in utility can lead to a selloff in the market, which depreciates the value of the token. As the token value depreciates, more speculators start dumping the token. A depreciating token value also disincentivizes participants from developing the token protocol and services, which reduces the token's utility for consumers, leading to reduced demand. The increased supply of tokens in the market further reduces the token's value, perpetuating the negative feedback loop.

Token Economies Galore

Token markets originated with digital currencies for P2P payments, the first to market being Bitcoin, which was then followed by a number of alternative digital currencies (so-called "alt-coins") such as zCash, LiteCoin, and Monero that address use cases for which Bitcoin was not the best solution. Tokenization moved to computing when Ethereum was launched for decentralized computing through smart contracts, and markets such as Storj, Filecoin, and Sia came into being for decentralized storage, an early-stage challenge to centralized cloud providers such as Amazon. Blockstack wants to go further by providing an entire decentralized infrastructure for building the decentralized applications of the future, effectively replacing the current internet-based architecture. Not even current-day platforms are safe: OpenBazaar offers a decentralized e-commerce marketplace, and Lazooz is a protocol for real-time ride sharing. Most of the tokens these economies operate with are "utility tokens" used to access a service that such economies provide, e.g., using decentralized storage or buying something in an e-commerce marketplace. Attention is now turning to "security/asset-backed tokens," which represent ownership in a company through cash flows, equities, futures, etc., or hard assets such as commodities or real estate. Asset-backed tokenization is in full swing, targeting commodities, precious metals, and real estate (Digix and Goldmint for precious metals, D1 and Cedex for diamonds, etc.). Security token offerings, like those enabled by Polymath, will provide investors exposure to token economies.

This is Just the Beginning

Just as the Cambrian period enabled the creation of a multitude of new life forms, the emergence of token economies is opening up a wide range of previously unavailable markets as well as new ways to compete against entrenched incumbents. Sure, many of the new token economies coming into being today will die out, but many of the ones that survive will be epic. Designing the right economic model for tokens is crucial, and token economics provides the necessary groundwork and framework for devising such models. More study in token economics is required, especially since it involves technically complex mechanisms and market designs that are entirely new. As the industry experiments with tokens across various markets, it will learn more about what is just great economic theory versus what really works in the complexity of real-world interactions.

Understanding The Building Blocks of a Distributed Ledger System

Introduction to DLTs

Distributed ledger technology (DLT) is being hailed as a transformative technology, with comparisons drawn to the Internet in its potential to transform and disrupt industries. As a "platform" technology for decentralized, trust-based peer-to-peer computing, DLT helps shape new "domain" capabilities, just as computer networking enabled the Internet and the creation of capabilities across communication, collaboration, and commerce. Like the Internet, it will have far-reaching consequences for the enterprise architectures of the future. Not only will DLT transform the technology stack of established domains (witness how Blockchain is transforming identity management infrastructure in the enterprise), but it will also give rise to new architecture paradigms as computing moves to decentralized trust-based networks, for example, in how an enterprise interacts with its business partners, suppliers, and buyers. The Internet took 30 years to have disruptive effects in the enterprise, and DLT's full impact is expected to play out over a similar time frame.

DLT represents a generic class of technologies (Blockchain is a prominent example), but all DLTs share the concept of the distributed ledger: a shared, immutable database that is the system of record for all transactions, current and historic, maintained by a community of participating nodes that have some sort of incentive (usually a token or a cryptocurrency) to maintain the ledger in good standing. The emergence of DLTs can be traced back to the original blockchain applications, Bitcoin and Ethereum. Various other distributed ledger applications have emerged to solve specific industry/domain issues: R3's Corda in financial services, Ripple for payments, etc. Innovation in the DLT space is proceeding at a feverish pace. The well-established DLT-based networks can essentially be segmented along two dimensions: how ledger integrity is guaranteed through validation, and whether the ledger is private or public.
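
To make the "shared, immutable database" idea concrete, here is a minimal sketch of a hash-linked ledger in Python. It is a toy illustration, not any production DLT: each block commits to its predecessor's hash, so altering any historical transaction changes every subsequent hash and is immediately detectable.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(transactions: list, prev_hash: str) -> dict:
    return {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}

def verify(chain: list) -> bool:
    """A chain is valid only if each block references the hash of the block before it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

if __name__ == "__main__":
    genesis = new_block([], prev_hash="0" * 64)
    b1 = new_block([{"from": "alice", "to": "bob", "amount": 5}], block_hash(genesis))
    b2 = new_block([{"from": "bob", "to": "carol", "amount": 2}], block_hash(b1))
    chain = [genesis, b1, b2]
    print("valid:", verify(chain))                    # True
    b1["transactions"][0]["amount"] = 500             # tamper with history
    print("valid after tampering:", verify(chain))    # False
```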

DLT and Enterprise Architecture

As participants in DLT-based networks developed by industry utilities or consortiums, organizations may not have a strong need to master the internal architecture design and trade-offs associated with such a platform. However, the architecture community in those organizations will still be required to understand how the networks they are participating in work, to the extent required to understand the implications for their organizations. Furthermore, as intra-company applications of DLT become mainstream, enterprise architects will increasingly be called upon to provide perspectives on the optimal design of the underlying technology. As DLT moves from innovation labs into the mainstream enterprise, architects will need to start preparing their organizations for accepting DLT-based applications into the organizational landscape. A good place for enterprise architects to start is to understand just what the DLT technical architecture encompasses. This involves understanding what building blocks comprise a DLT system, and what architectural decisions need to be made.

The Building Blocks of a DLT System

To understand a complex technology such as DLT, it may be helpful to draw parallels to the TCP/IP stack for computer networking, to which Blockchain has been compared in the past (The Truth About Blockchain). While there may not be a straight one-to-one correspondence between the Internet's OSI model and the DLT architecture, drawing the parallel helps one understand conceptually how the building blocks fit together. The OSI model is a generic architecture that represents the several flavors of networking that exist today, ranging from closed, proprietary networks to open, standards-based ones. The DLT building blocks likewise provide a generic architecture that represents the several flavors of DLTs that exist today, and ones yet to be born.

In theory, it should be possible to design each building block independently, with well-defined interfaces, so that the whole DLT system comes together as one, with higher-level building blocks abstracted from the lower-level ones. In reality, architectural choices in one building block influence those in other building blocks, e.g., the choice of a DLT's data structure influences the consensus protocol most suitable for the system. As common industry standards for DLT architecture and design develop (Hyperledger is an early development spearheaded by The Linux Foundation) and new technology is proved out in the marketplace, a more standardized DLT architecture stack will perhaps emerge, again following how computer networking standards emerged. There is value, nevertheless, in being able to conceptually view a DLT system as an assembly of these building blocks to understand the key architecture decisions that need to be made.

Key Architectural Tradeoffs in DLT Systems

Architecting a DLT system involves making a series of decisions and tradeoffs across key dimensions.  These decisions optimize the DLT for the specific business requirement: for some DLT applications, performance and scalability may be key, while for some others, ensuring fundamental DLT properties (e.g., immutability and transparency) may be paramount.   Inherent in these decisions are architectural tradeoffs, since the dimensions represent ideal states seldom realized in practice.  These tradeoffs essentially involve traversing the triple constraint of Decentralization, Scalability, and Security.

Decentralization reflects the fundamental egalitarian philosophy of the original Bitcoin/Blockchain vision, i.e., the distributed ledger should be accessible, available, and transparent to all at all times, and all participating nodes in the network should validate the ledger and thus have the full ledger data. Decentralization enables trustless parties to participate in the network without the need for central authorization. Scalability refers to the goal of having an appropriate level of transaction throughput, sufficient storage capacity for the DLT to record transaction data, and acceptable latency between a transaction being submitted and being validated and recorded. Scalability ensures that appropriate performance levels are maintained as the size of the network grows. Finally, Security is the ability to maintain the integrity of the ledger by warding off attacks and making it impossible to maliciously change the ledger for one's benefit. Fundamentally, this dimension reflects a security design that is built into the fabric of how the ledger operates, rather than relying on external 'checking' to ensure safety.

Bringing It Together: DLT Building Block Decisions and Architectural Tradeoffs

Applying the architectural decisions to the DLT system allows one to come up with different flavors of DLT systems, each making tradeoffs to navigate the triple constraint described above. Traversing the sides of the triangle allows one to move between different DLT architecture styles, with the vertices of the triangle denoting the purest architectural states, seldom realized in practice. For example, systems like Bitcoin and Ethereum tend toward Vertex A, maximizing Decentralization through their decentralized P2P trustless model and Security through consensus-building and validation methods that prevent malicious attacks (although both Bitcoin and Ethereum have been shown to have other security vulnerabilities), but they sacrifice much in terms of Scalability (Bitcoin's scalability woes are well known, and Ethereum is only slightly better). On the other hand, permissioned DLTs, such as Corda, tend toward Vertex C, maximizing Scalability and guaranteeing Security, but they sacrifice Decentralization (by definition, permissioned DLTs are not transparent since they restrict access, and validation is provided only by a set of pre-authorized validating nodes) and may suffer other security issues (both the trusted nodes and the central authority in a permissioned DLT system can be attacked by a nefarious party). DLT variations such as Bitcoin's Lightning Network and Ethereum's Raiden tend toward Vertex B, aiming to use off-chain capabilities to improve the Scalability of traditional Bitcoin and Ethereum networks while preserving Decentralization (despite some recent concerns that these networks have a tendency to become centralized in the long run), although their off-chain capabilities may require additional Security capabilities (they also partially move away from the Blockchain's decentralized security apparatus). Let's examine how these tradeoffs come into play at the level of the DLT building blocks.

Layer 3: Ledger Data Structure

Ledger Data Structure encapsulates decisions around how the distributed ledger is actually structured and linked at a physical level, e.g., as a chain of blocks, a graph, etc. Additionally, it captures decisions around how many ledger chains there are, and whether nodes carry the entire ledger or just a part of it. In traditional Blockchain, the ledger is structured as a global sequential linked list of blocks, instances of which are replicated across all participating nodes. This design goes hand in hand with traditional Blockchain's Proof of Work consensus protocol in ensuring high levels of Decentralization and Security, since each node has the current instance of the global ledger chain and there is decentralized consensus building for block validation (although a few security vulnerabilities with Blockchain have come to the forefront, and Proof of Work is susceptible to centralization due to economies of scale in mining). As we know, this design takes a toll on Scalability – Bitcoin's Blockchain can process only a handful of transactions per second, and the time required for processing a block is high (Bitcoin generates a new block every 10 minutes).

Some new designs are emerging with alternative data structures that improve Scalability and performance, such as NXT's and SPECTRE's DAG (directed acyclic graph) of blocks, which mine DAG blocks in parallel to allow for more throughput and lower transaction time, and IOTA's Tangle, a so-called "blockless" DLT that gets rid of block mining altogether and relies on a DAG of transactions to maintain system state and integrity. These new designs have yet to be implemented and used at scale, and many of them come with their own set of challenges (some claim they will continue to rely on some form of centralization to gain scale, and they also have security-related challenges). However, the DLT community's interest has been high: IOTA's Tangle has been creating a buzz in DLT circles as a possible serious contender in the IoT world (since its data structure and protocol are well suited for handling continual streams of data), and several blockless DLT startups have been born lately.
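
As a rough illustration of the "blockless" idea, the sketch below builds a toy DAG in which every new transaction directly approves two earlier transactions. It is a simplified, hypothetical sketch loosely inspired by the Tangle, not IOTA's actual protocol (which selects tips via weighted random walks and attaches proof of work to each transaction).

```python
import hashlib
import random

class DagLedger:
    """Toy DAG ledger: every transaction approves two earlier ones (loosely Tangle-inspired)."""

    def __init__(self):
        self.txs = {}                                   # tx_id -> {"data": ..., "approves": [...]}
        self._add({"data": "genesis", "approves": []})  # seed the DAG with a genesis transaction

    def _add(self, tx: dict) -> str:
        tx_id = hashlib.sha256(repr(sorted(tx.items())).encode()).hexdigest()[:16]
        self.txs[tx_id] = tx
        return tx_id

    def submit(self, data) -> str:
        # A real protocol selects tips via a weighted random walk; here we just pick any two known txs.
        approves = random.sample(list(self.txs), k=min(2, len(self.txs)))
        return self._add({"data": data, "approves": approves})

if __name__ == "__main__":
    ledger = DagLedger()
    for i in range(5):
        tx = ledger.submit({"payment": i})
        print(tx, "approves", ledger.txs[tx]["approves"])
```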

Tinkering with how the ledger data is stored across nodes represents another opportunity for gaining Scalability. For example, sharding, a concept fairly well established in the distributed database world, is coming to DLTs. Applied to DLTs, sharding enables the overall blockchain state to be split into shards that are then stored and processed in parallel by different nodes in the network, allowing higher transaction throughput (Ethereum's roadmap pairs sharding with its Casper upgrade to drive scalability and speed). Similarly, Scalability can be improved by having multiple chains, possibly private, to enable separation of concerns: "side chains" enable processing to happen on a separate chain without overloading the original main chain. While such designs improve Scalability, they move away from DLT's vision of enabling democratic access and availability to all participants at all times, and they also present Security-related challenges, part of the reason why widespread adoption of sidechains has been slow.
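
A minimal sketch of the sharding idea follows. It is hypothetical and not Ethereum's actual design: transactions are routed to shards by hashing the sender's address, so each shard's nodes only process a fraction of total traffic.

```python
import hashlib

NUM_SHARDS = 4  # assumption for illustration

def shard_for(address: str) -> int:
    """Deterministically map an account address to a shard."""
    return int(hashlib.sha256(address.encode()).hexdigest(), 16) % NUM_SHARDS

def route(transactions):
    """Partition transactions so each shard can be stored and processed in parallel."""
    shards = {i: [] for i in range(NUM_SHARDS)}
    for tx in transactions:
        shards[shard_for(tx["from"])].append(tx)
    return shards

if __name__ == "__main__":
    txs = [{"from": f"acct{i}", "to": f"acct{i+1}", "amount": i} for i in range(12)]
    for shard_id, batch in route(txs).items():
        print(f"shard {shard_id}: {len(batch)} txs")
```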

Layer 2: Consensus Protocol

The consensus protocol determines how transactions are validated and added to the ledger, and the decision-making in this building block involves choosing a specific protocol based on the underlying data structure and objectives related to the triple constraint. Proof of Work, the traditional Blockchain consensus protocol, requires transactions to be validated by all participating nodes; it enables a high degree of Decentralization and Security, but suffers on Scalability. Alternative protocols, such as Proof of Stake, provide somewhat better Scalability by changing the incentive mechanism to align more closely with the good operation of the ledger. Protocols such as those based on Byzantine Fault Tolerance (BFT), which have been successfully applied to other distributed systems, are applicable to private ledgers and depend upon a collection of pre-trusted nodes. Such protocols sacrifice Decentralization to gain Scalability.
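
The contrast between the two families can be sketched in a few lines. These are simplified toy versions, not production implementations: Proof of Work burns computation searching for a nonce whose hash clears a difficulty target, while Proof of Stake selects the next validator with probability proportional to stake.

```python
import hashlib
import random

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce whose hash has `difficulty` leading zeros (toy PoW)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def proof_of_stake(stakes: dict) -> str:
    """Pick the next validator with probability proportional to stake (toy PoS)."""
    validators, weights = zip(*stakes.items())
    return random.choices(validators, weights=weights, k=1)[0]

if __name__ == "__main__":
    print("PoW nonce:", proof_of_work("block-42"))
    print("PoS validator:", proof_of_stake({"alice": 50, "bob": 30, "carol": 20}))
```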

Ethereum's Raiden and Bitcoin's Lightning Network are innovations that drive scalability for Ethereum and Bitcoin respectively by securely moving transactions off the main chain to a separate transacting channel, and then moving back to the main chain for settlement purposes – the so-called "Layer 2" innovations. This design allows load to move off of the main ledger; however, since transactions occurring on the channel are not recorded on the ledger, it sacrifices Security (the transacting channels need additional security apparatus that is not part of the original chain) as well as Decentralization (since channel transactions are not accessible to participants).
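
The channel pattern can be illustrated with a toy sketch. This is hypothetical and omits everything that makes Lightning or Raiden secure in practice (signed commitments, hash time-locked contracts, dispute windows): two parties lock funds on-chain, exchange any number of balance updates off-chain, and only the final balance is settled back to the ledger.

```python
class PaymentChannel:
    """Toy two-party payment channel: open on-chain, transact off-chain, settle on-chain."""

    def __init__(self, ledger: dict, a: str, b: str, deposit: int):
        self.ledger = ledger
        self.balances = {a: deposit, b: deposit}
        for party in (a, b):
            ledger[party] -= deposit          # on-chain: lock deposits into the channel

    def pay(self, sender: str, receiver: str, amount: int):
        # off-chain: update the channel state only; nothing hits the ledger
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount

    def settle(self):
        # on-chain: a single settlement records the net result of all channel payments
        for party, balance in self.balances.items():
            self.ledger[party] += balance

if __name__ == "__main__":
    ledger = {"alice": 100, "bob": 100}
    channel = PaymentChannel(ledger, "alice", "bob", deposit=50)
    for _ in range(1000):                     # a thousand off-chain payments, zero ledger writes
        channel.pay("alice", "bob", 1)
        channel.pay("bob", "alice", 1)
    channel.pay("alice", "bob", 10)
    channel.settle()
    print(ledger)                             # {'alice': 90, 'bob': 110}
```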

A number of other protocols and schemes to improve scalability and security are in the works, many of which are variations of the basic PoW and PoS, and which envision a future comprising not one single ledger chain but a collection of chains. Examples include Kadena, which uses PoW on a braid of chains; EOS, which uses delegated PoS; and Cosmos's Tendermint, which uses BFT-based PoS across a universe of chains.

Layer 1:  Computation and App Data

DLT resources such as storage and computation come at a premium, and it costs real money to submit transactions in a DLT system. In the topmost layer, therefore, the architectural decisions deal with providing flexibility and functionality related to data storage and computation – essentially, how much of it should reside on-chain, and how much off-chain. Additionally, this layer deals with decisions around how to integrate the DLT with events from the real world.

For computation, the Bitcoin Blockchain and Ethereum provide constructs for putting data and business logic on-chain, and Ethereum is far more advanced than Bitcoin's Blockchain in this respect since it offers "smart contracts", which are essentially code that is executed on the chain when certain conditions are met. There are obvious advantages to doing all computation on-chain: interoperability between parties and immutability of code, which facilitate trust building. There is, however, a practical limit to how complex smart contracts can be, a limit that is easily reached. Offloading complex calculations to off-chain capabilities allows one to leverage DLT capabilities in a cost-effective and high-performing manner. TrueBit, an online marketplace for computation, enables a pattern in which complex, resource-intensive computation can be offloaded to a community of miners who compete to complete the computation for a reward and provide results that can be verified on-chain for authenticity. While this provides upside in terms of Scalability and Decentralization, there are Security-related implications of using off-chain computation, an area of active research and development.
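
The general off-chain computation pattern can be sketched as follows. This is a simplified illustration of the idea, not TrueBit's actual verification game: heavy work happens off-chain, and the chain only stores and checks a commitment to the result.

```python
import hashlib
import json

def commitment(result) -> str:
    """Hash commitment to a computation result, cheap enough to store on-chain."""
    return hashlib.sha256(json.dumps(result, sort_keys=True).encode()).hexdigest()

def offchain_worker(task: list) -> dict:
    # Expensive computation runs off-chain (here, trivially, a big sum of squares)
    return {"task_id": "job-1", "answer": sum(x * x for x in task)}

def onchain_check(expected_commitment: str, claimed_result: dict) -> bool:
    # The on-chain logic only needs to recompute one hash to accept or reject the claim
    return commitment(claimed_result) == expected_commitment

if __name__ == "__main__":
    task = list(range(1_000_000))
    result = offchain_worker(task)
    stored = commitment(result)               # only this digest is recorded on-chain
    print("honest result accepted:", onchain_check(stored, result))
    result["answer"] += 1                     # a cheating worker alters the answer
    print("tampered result accepted:", onchain_check(stored, result))
```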

What applies to computation also applies to data storage in the DLT world. While Blockchain and Ethereum provide basic capabilities for storing data elements, a more suitable design for managing large data sets in DLT transactions is to use off-chain data infrastructure providers or cloud storage providers while maintaining hashed pointers to these data sets on-chain. Solutions like Storj, Sia, and IPFS aim to provide a P2P decentralized secure data management infrastructure that can hook into DLTs through tokens and smart contracts, and manage data and computation securely through technologies such as secure MPC (multi-party computation). Similar to off-chain computation, off-chain storage has upside in terms of Scalability and Decentralization; however, there are security- and durability-related implications.
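
A sketch of the hashed-pointer pattern described above (illustrative only; no specific storage network's API is used): the bulky payload lives off-chain, and the ledger stores just a content hash that lets anyone verify what they later retrieve.

```python
import hashlib

offchain_store = {}   # stand-in for a decentralized storage network or cloud bucket
onchain_ledger = []   # stand-in for the distributed ledger

def store_document(payload: bytes) -> str:
    """Store the payload off-chain and record only its content hash on-chain."""
    pointer = hashlib.sha256(payload).hexdigest()
    offchain_store[pointer] = payload
    onchain_ledger.append({"doc_pointer": pointer})   # small, fixed-size on-chain footprint
    return pointer

def retrieve_and_verify(pointer: str) -> bytes:
    payload = offchain_store[pointer]
    if hashlib.sha256(payload).hexdigest() != pointer:
        raise ValueError("off-chain data does not match on-chain hash pointer")
    return payload

if __name__ == "__main__":
    ptr = store_document(b"large scanned contract ..." * 10_000)
    print("ledger entry:", onchain_ledger[-1])
    print("verified bytes:", len(retrieve_and_verify(ptr)))
```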

What provides immutability to the distributed ledger (its deterministic method of recording transactions) is also its Achilles' heel: it is difficult for the ledger to communicate with and interpret data it gets from the outside, non-deterministic world. Oracles, services that act as middlemen between the distributed ledger and the non-DLT world, bridge that gap and make it possible for smart contracts to be put to real-world use. Various DLT oracle infrastructures are in development (ChainLink, Zap, Oraclize, etc.) that provide varying features; choosing the right oracle architecture is thus crucial for the specific use case under consideration. Similar to off-chain data, oracles provide upside in terms of Scalability and Decentralization; however, there are security and data verifiability related concerns.
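
A bare-bones sketch of the oracle pattern follows. It is hypothetical and not how ChainLink, Zap, or Oraclize actually work internally; in particular, an HMAC with a shared secret stands in here for the public-key signatures a real oracle would use. The idea is simply that an off-chain service attests to an observation about the outside world, and on-chain logic acts on the data only if the attestation checks out.

```python
import hashlib
import hmac
import json

ORACLE_SECRET = b"oracle-signing-key"   # simplification: a real oracle would sign with a private key

def oracle_report(observation: dict) -> dict:
    """Off-chain oracle service: observe the world and attest to the observation."""
    payload = json.dumps(observation, sort_keys=True).encode()
    signature = hmac.new(ORACLE_SECRET, payload, hashlib.sha256).hexdigest()
    return {"observation": observation, "signature": signature}

def smart_contract(report: dict) -> str:
    """On-chain logic: only act on data carrying a valid oracle attestation."""
    payload = json.dumps(report["observation"], sort_keys=True).encode()
    expected = hmac.new(ORACLE_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["signature"]):
        return "rejected: untrusted data"
    return "settled" if report["observation"]["flight_delayed"] else "no payout"

if __name__ == "__main__":
    report = oracle_report({"flight": "BA117", "flight_delayed": True})
    print(smart_contract(report))                       # settled
    report["observation"]["flight_delayed"] = False     # tampering invalidates the attestation
    print(smart_contract(report))                       # rejected: untrusted data
```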

Conclusion

These are still early days for DLT technology, and many of the improvements needed to make DLT commercially implementable are yet to come. Beyond scalability and security, DLTs face a number of hurdles to enterprise adoption, such as interoperability, complexity, and a lack of developer-friendly toolkits. The future will probably bring not just one ledger technology here or there, but a multitude, each optimized for a specific use case within an organization, and even superstructures such as chains of chains connected with oracles, middleware, and the like. Nor will these structures replace existing technology architecture; they will exist alongside legacy technologies and will need to be integrated with them. Like networking, DLTs will give rise to new processes, teams, and management structures. Enterprise architects will play a central role in facilitating the development of DLT as a true enterprise technology.

The Five D’s of Fintech: Disintermediation

Finance, as the adage goes, is the art of passing money from hand to hand until it finally disappears. Like giant spiders in the middle of a system of webs, banks have played a key role in intermediating money flows across the vast capitalistic system, and doing so in a highly profitable manner. Banking of yore – taking deposits from customers and making loans to borrowers – has given way to a lending system dominated by non-bank financial institutions and capital markets. This "banking disintermediation" – banks no longer holding the loans they originated on their balance sheets but selling them off, borrowers going directly to the capital markets rather than to banks to obtain credit, savers investing directly in securities – refers to credit disintermediation and has been in the making for a number of decades: banks moved from the boring, low-profitability business of credit provisioning to high-margin fee-based businesses (investment banking, M&A, insurance, etc.). "Disintermediation" – the third theme in The Five D's of Fintech – has taken on a renewed significance in the wake of the rising tide of fintech: this time around, it refers to how banks may be disintermediated from their customers in their fee-based businesses. The customer relationship, the bedrock of the origination and sales that dominate such fee-based businesses, is now under attack, and banks have started to take note.

Fintech-led disintermediation has been palpable in the areas into which venture investment has traditionally poured: peer-to-peer lending, remittances, payments, equity crowdfunding, and even robo-advisory. Fintech use cases on the disintermediation of traditional payment players are old news. By comparison, the impact of fintech-led disintermediation of banks in capital markets appears relatively small. By some estimates, only $4 billion of the $96 billion (around 4%) of fintech investment since the beginning of the millennium has been directed to capital markets, and of 8,000+ fintechs globally, only around 570 (around 7%) operate in capital markets (Fintech in Capital Markets: A Land of Opportunity).

This may be beginning to change, not least because of the juicy returns that can potentially be plucked from investment banks in origination and sales, the key activity in fee-based businesses such as investment banking, M&A/payments, and asset management. At an estimated 22% ROE (Cutting through the noise around financial technology), origination and sales is a much more profitable business than credit provisioning, the key activity in core banking (lending, deposits, etc.), which earns an estimated ROE of only 6%. Entry barriers to capital markets and investment banking are significant: the highly regulated nature of the business, the oligopolistic nature of the industry, vicious competition from incumbent banks, and a complex operating environment. However, regulatory pressures, compressed margins, and technology-enabled opportunities are forcing incumbent banks to acknowledge fintech-led opportunities.

The investment banking world is being buffeted by the forces of digitization, regulation, and competition. Continued electronification of OTC markets, for example, has meant greater transparency around the price discovery process for investors. Dealers have pulled back from markets where regulatory requirements have rendered them unprofitable, providing an opening wedge for non-bank players such as hedge funds and institutional investors to step in. The financial crisis has tarnished banks' reputation. It is against this backdrop that fintech outfits are evolving from just providing solutions to automate and streamline bank transactions to being serious threats across asset classes (equity/debt/FX), markets (primary/secondary), and the value chain (origination through distribution and dealing). In the world of supply chain financing, technology-led disintermediation has begun to make inroads. Regulatory pressures, decreasing margins, and KYC/AML challenges have made it difficult for commercial banks to scale their supply chain finance business: the market is underserved and rapidly growing. This is changing with the emergence of platforms such as PrimeRevenue, an American supply chain financing platform that connects 70 lenders, including 50-odd banks, to 25,000 suppliers with $7bn-worth of invoices a month. While such platforms are not overthrowing banks' supply chain financing business completely, they are slowly but surely inserting themselves into direct relationships with end customers: witness how C2FO has offered Citi more lending opportunities in return for access to its supply chain financing customers.

The origination market for new securities issuance is an area where incumbents are still strong but evolving conditions may favor fintech-led innovation. Some large asset managers want to ensure that corporations structure securities that those asset managers want to buy, not necessarily those that a bank may structure: this may be just the kind of solution a fintech develops. Platforms that connect investors directly with issuers are few and far between in the equity and debt primary markets, and even those that exist have dealers in the mix. But slowly and surely, outfits such as Origin (a private placement platform aiming to simplify and automate the issuance of Medium Term Notes) and Overbond (a Canadian platform provider aiming to streamline bond issuance for both private placements and public offerings) are going to market with automation and auditing solutions today, and they may in the future choose to intermediate themselves more prominently between investors and issuers by offering data assets and related value-added services.

There are, however, more worrisome developments for banks in the securities origination market. Investors are discovering new avenues for investing, for example equity crowdfunding, which may affect banks' traditional business in private placements and investment banking. There are indications that institutional investors have already started to use crowdfunding websites to gain stakes in new businesses, such as in the real estate sector. One website, for example, enables crowdfunders to pool their capital and compete with institutional investors or co-invest with venture capital funds. There is a perceived threat that more buy-side firms will tap crowdfunding sites. Most worrisome for banks is perhaps the specter of alternative fundraising models, whereby the conventional IPO gives way to mechanisms such as electronic auctions, crowdfunding, and initial coin offerings (ICOs).

New capital raising models represent the latest in how disintermediation is playing out at the very heart of the capitalistic system. Equity crowdfunding has the potential to disintermediate early-stage financing and democratize access to capital and investment opportunities (in spite of the regulatory and reporting burdens imposed by the JOBS Act): witness the rise of platforms such as Kickstarter and Indiegogo. Equity crowdfunding has been thriving for a few years now, but it is Initial Coin Offerings (ICOs) that have attracted investor attention of late.

In contrast to traditional venture funding or IPOs, where investors typically get a slice of the equity and control in the business, ICOs involve raising money by selling a cryptocurrency – called a "token" – at a discount to investors, who can then trade the tokens on a crypto exchange. ICOs are different from traditional fundraising and even equity crowdfunding: they are unregulated, and startups do not have to fork out a slice of their equity. Indeed, ICOs have been on a tear: a quarter of the blockchain investment over the past couple of years (approx. $250 million) has come from ICOs, two recent ICOs (Qtum and OMG) passed the unicorn mark in a matter of mere months, and storied VCs are investing in ICO-funded startups (Cryptocurrency Hedge Fund Polychain Raises $10 million). ICOs offer yet another fundraising avenue for startups that are skittish about handing over control to outsiders, that do not want to go through the regulatory hassle of taking their company public, or that have been neglected by the market. For investors, ICOs provide the lure of massive returns (although they are extremely high risk and many are scams).

Capital markets' "Uber moment" may not be around the corner, but the capitalistic middlemen are going to have to increasingly define what value they bring in light of the disintermediated banking models being championed by fintechs. It is not just in capital raising that fintech startups are mushrooming, but in other areas of the capital markets as well, for example clearing and settlement, where blockchain-based startups are providing capabilities for fully automated settlement without the need for a central counterparty. Symbiont, a New York-based blockchain startup, has demonstrated how "smart securities" can streamline post-trade processing and settlement. This has huge implications for middle- and back-office functions in banks, and for the wider capital market ecosystem including custodians, clearing houses, and depository institutions. This redundancy of centralized processing in the fintech utopia will be the theme of the next installment of the Five D's of Fintech series.

The Five D’s of Fintech: Disaggregation

"Death by a thousand cuts", "sandblasting", "mastodons attacked by ants", and similar metaphors have been used to describe the scourge of fintech and insurtech insurgents and their impact on incumbent banks and insurance companies. "Disaggregation", or "unbundling", of products and services lies behind fintech's poaching of customers from banks, not single-handedly but collectively, one product or service at a time. Stories and studies abound on how technology is not just unbundling products and services, but reshaping entire value chains. This disaggregation is now well established in traditional banking (see CBInsight's blog post as an example), but only now emerging in insurance. Disaggregation is the topic of this post, the second installment in the fintech series (see The Five D's of Fintech: Introduction), in which we will look at where and how such disaggregation is taking place in the insurance industry.

The insurance industry is experiencing a technological double whammy of sorts: not only does technology enable the creation and use of new behavioral context that fuels new competition, but it also threatens the underlying business economics of the industry. Insurance experts talk about "behavior disaggregation" to describe how consumer behaviors can be tracked and analyzed to engage with consumers directly in real time, price risk accurately and discretely, and provide frictionless services. For example, it is not hard to imagine a "connected home" where various safety measures one might take, e.g., installing a burglar alarm, are instantly used to recalibrate risk and thus adjust the homeowners or renters insurance. Tech-savvy startups are already leveraging such behavior disaggregation: pay-as-you-go car insurance from Metromile is an example where driver behavior can be tracked to provide fit-for-purpose policies. Oscar, a health insurer in New York, gives all its policyholders a fitness tracker; whenever they hit a set goal (walking 10,000 steps in a day, say) they get a refund of a dollar. Where incumbent insurers put their consumers in a policy straitjacket based on blunt risk indicators such as age and occupation, insurtech players leverage technology to study behaviors down to the 'last mile' to offer highly flexible solutions, customized not just by the asset type insured but also by duration, for example through "micro duration" policies. These companies have the focus and the agility to build this specialization: as Metromile has done for car insurance, Trov has done for general merchandise, Ladder for life, and Hippo for home. Increasingly, traditional insurers may find that insurtech's win-win value propositions are hard to beat with traditional product bundles – hence the case for unbundling. But unbundling existing products is not enough. Unfortunately for traditional insurers, insurtech's micro-targeting of risk shrinks incumbents' existing risk and profit pools: they are thus forced not just to compete with the upstarts, but also to seek out new revenue sources.

Insurtech is forcing disaggregation at an industry-level scale. Incumbents have traditionally managed the entire value chain spanning product/service distribution, underwriting, claims management, and investment/risk management. Using independent agency networks, conventional underwriting models, and re-insurance for risk management, carriers have thrived in the marketplace. This value chain is now coming apart as each link in the chain is impacted by technology. New digital sources of distribution are threatening to disintermediate carriers from end consumers – just witness the rise of CoverHound for auto, PolicyGenius for life and disability, Abaris for annuities, and various others. Traditional risk pools are shrinking, risk is migrating from consumers to products, and the nature of risk is evolving thanks to self-driving cars, IoT technologies, and the sharing economy – all this has led to the emergence of new competitors offering alternative underwriting models, such as Lemonade, Guevara, and Friendsurance. Upstarts such as Claimable seek to intermediate to provide an easy claims settlement experience to end consumers. New arrangements such as Managing General Agents, catastrophe bonds, and collateralized reinsurance are disaggregating the carrier/re-insurer relationship: carriers can now go directly to capital markets, and re-insurers can strike up business arrangements with startups focused on customer acquisition. The neatly linked insurance value chain is slowly moving to a horizontal, stacks-based structure (see BCG's Philip Evans' idea of stacks here).

Insurance is different from traditional financial services in that players in insurance, unless they are pure brokers, have to take on an element of risk and hold the associated capital, all of which comes with heavy regulatory requirements. For these reasons, insurtech has been slow to penetrate the $6 trillion insurance industry goliath. The pace, however, may accelerate. According to McKinsey, automation could leave up to 25 percent of the insurance industry's current full-time positions consolidated or replaced over the next decade (see Automating the Insurance Industry). If nothing else, carriers should do everything in their power to prevent disintermediation, which will be the topic of the next installment in the fintech series.

The Five D’s of Fintech: Introduction

"Fintech" (a portmanteau of financial technology that refers to the disruptive application of technology to processes, products, and business models in the financial services industry) is coming of age: two of the most prominent Fintechers, OnDeck and Lending Club, have gone public, many more are processing transactions to the order of billions of dollars, and outfits providing market intelligence in fintech are cropping up – there is even a newly minted index to track activity in marketplace lending. Banks are increasingly taking note of the Fintech movement, partnering with startups, investing in them, or even acquiring them outright. Venture funding in fintech grew by 300% in one year to $12 billion in 2014. According to Goldman Sachs's "Future of Finance" report, the total value of the market that can potentially be disrupted by Fintechers is an estimated $4.3 trillion.

Fintech is a complex market, spanning a broad swath of finance across individual and institutional markets and including market infrastructure providers as well. It is a broadly defined category for upstarts who have a different philosophy around how finance should function and how it should serve individuals and institutions. While some Fintechers seek to reduce transaction fees and improve customer experience, others exist to provide more visibility into the inner working of finance. In spite of this diversity, there are some common threads and recurring themes around why Fintech firms exist and what their market philosophy is. The 5 D’s of Fintech – Democratization, Disaggregation, Disintermediation, Decentralization and De-biasing – represent common themes around the mission, business models, values, and goals of many of these firms. In this series of posts on Fintech, we will look at each of the 5 D’s of Fintech, starting with Democratization — the mission of many a Fintech firm.

The Five D’s of Fintech

Democratization

Technology has long enabled democratized access to financial services; however, Fintech is taking the movement to another level by targeting specific market niches with customized value propositions. A central appeal of many Fintechers is their promise to bring to the masses resources and capabilities which heretofore have been the preserve of the wealthy, the elite, or the privileged. This has been made possible both by market opportunity and by internal capability: the market opportunity of serving a market whitespace, and the ability to do so economically through the use of data and advanced technologies.

The financial inclusion that Fintechers are now enabling is driven by their ability to clear obstacles, remove barriers, and enable access where none existed before, whether it is serving the unserved or underserved SMBs that have typically been shunned by traditional banks (Funding Circle), providing credit to the underbanked segment lacking traditional credit scores (Kreditech), enabling investment advice without the need to rely on expensive financial advisors (Nutmeg or Betterment), or facilitating access to the capital markets by offering low-cost brokerage services (Robinhood). Financial services are now "for the people" and "by the people" as well: Quantiacs, a fintech startup with the aim of revolutionizing the hedge fund industry, is essentially a marketplace for quantitative trading strategies that enables anyone to market their quantitative skills and trading strategies. Or OpenFolio, an online community that allows one to link portfolios and measure investment performance against one's community and relevant benchmarks. Wealth management is perhaps the market ripest for democratization, as shown by the rapid emergence of a raft of outfits such as HedgeCoVest and iBillionaire (platforms that allow investors to mirror the trades of hedge funds and billionaires, respectively), Loyal3 (which offers no-fee access to IPOs), and Algomi and True Potential (which undo trading obstacles for investors).

As Vikas Raj of Accion Venture Lab notes, the real potential of fintech is in democratizing access to finance for the billions of low-income unbanked people in emerging markets. The high-complexity, low-scale nature of this market is exactly the kind Fintechers are good at capitalizing on, and this is evident from the long list of companies emerging in this market beyond Silicon Valley and New York. Where traditional finance and government agencies have failed, Fintech has the promise and the potential to excel.

Other industries can learn a lot by observing how Fintech is driving democratization in finance. Whether it is healthcare, education, media, or government services, there is potential value in market segments that are currently unserved or underserved which a Fintech-like movement can unlock. Adopting the technologies underlying Fintech is part of the story; what is needed first is the recognition of the potential for change, the support from the markets, and an entrepreneurial spirit to lead the movement.

Paths to Power: What can Architects Learn from Sources of America’s Global Dominance

In discussing American exceptionalism and influence in the modern world, political scientists and academics talk about various sources of American power, for example US military might or its economic dominance. Walter Russell Mead, an academic and foreign policy analyst, defines four kinds of American power: the sharp power of the US military that provides the foundation; the soft power of American pop culture and the sticky power of an American-led economic world order that build atop the foundation; and finally, at the very top, the hegemonic power of the US, a prerogative and privilege to provide credence and confidence to all things worldly, whether they be economic coalitions or other global pursuits. Organizational theorists and researchers have done extensive study of the power and influence of individuals and groups within organizations, but the political scientists' lens provides an interesting framework for understanding how power should be built and exercised. One such application is looking at how the architecture function wields and exercises power within an IT organization.

Architects typically exercise their power and exert influence through formal governance processes and mechanisms – the infamous "architecture review board" is one of the various architecture forums that have given many a sleepless night to IT program managers. Architecture governance can be characterized as the architects' sharp power, exercised by poking and prodding IT teams to behave in a certain manner. Just as a nation's sharp military power serves as an important foundation for its power structure, so too does the architecture governance framework for the architecture group. But a foundation is not enough to create and sustain power. The US's overwhelming military power is a powerful deterrent for many, but it is far from enough to ensure American dominance in world affairs. And this has been demonstrated time and again in IT organizations where, in spite of strong governance processes, the architecture team fails to have appropriate influence in decision-making. What is needed to complement the sharp power of governance are the architecture team's sticky and soft powers.

America's sticky power rests on two foundations: a dollar-based global monetary system and free trade. A dollar-based system not only allows America to reap direct economic benefits, e.g., seigniorage from issuing bank notes to lenders all over the world, but it also allows America to exert the influence of its monetary and fiscal policies far and wide. Globalization and free trade, championed by America-based multinationals, have created an economic framework in which benefits accrue to all countries that participate. There are strong incentives for a country to participate in such an economic system, and strong disincentives to leave. Architects have an opportunity to wield sticky power by creating a system in which their work creates benefits for their key stakeholders – architects should reorient their work to facilitate the outcomes and key decision-making that IT management would like to have but cannot, due to political, organizational, or technical reasons. Be it brokering talks, striking alliances, or negotiating trade-offs, architects in this role will need to function as first-class deal makers. And in this role, architects will be seen as belonging to an elite group that knows how to get things done. In an ideal state, IT managers will have no choice but to engage with the architects, because of the assurance that such an engagement provides for achieving the outcomes they desire. Achieving initial success will create a self-reinforcing system in which stakeholders will be increasingly willing to work with the architecture team and increasingly hesitant to work against it. Sticky power, however, needs an inducing agent, something that will draw others in. That is where soft power comes in.

American soft power emanates from America's ability to successfully export its culture, values, ideals, and ideas to the rest of the world. Architecture's ability to successfully project its image across the entire organization is critical to garnering soft power. The typical perception of architecture work as just technically focused is a result of the architecture team's failure to project an image commensurate with the stature of true architecture work. Architects need to build a culture, a set of ideas, principles, and work habits that are unique and suited to the problem-solving, deal-making, and relationship and influence building that architecture work demands. It starts with hiring the best and the brightest – not just the technically savvy, but true leaders, strategic thinkers, and doers. But creating such a culture is not enough – it needs to be packaged and "exported" to the rest of the organization, with the goal of creating an environment where others actually want to join the architecture team because it has a certain cool factor. On this scale, the typical architecture team fails miserably – one has yet to come across an IT organization where the architecture team is considered one of the cool teams to join.

Hegemonic power, the holy grail on the road to building power and influence, is power that derives from all the other kinds – it is the cherry on top that allows America to set the agenda and rules for world affairs, or to be the "gyroscope of the world". The architecture team is in a prime position to be the gyroscope of the IT organization. By combining the power of governance with the ability to create real value for its stakeholders and to attract the best talent, the architecture team can influence decision-making at the highest levels – it can set the agenda to facilitate its own goals and outcomes and thus perpetuate its power and influence. The nature of architecture work is such that sustaining power and influence is crucial to ensuring the long-term success of architects. Maintaining power on an ongoing basis, however, takes effort, wise decision-making, and moving with the times – just witness the case of Britain, which only around a hundred years ago was the world's leading power by far, but which was gradually supplanted by America as it held onto stubborn positions and made fatal policy mistakes.

Testing in a Continuous Delivery Environment

In his book Out of the Crisis, W. Edwards Deming cautioned: "Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place." Ever since, 'Building Quality In' has become one of the central tenets of quality-focused lean initiatives, including lean software development. The act of testing in software development is an example of inspection: inspection to find bugs and faults in the developed software; static code analysis is another example. Quality is important in the context of software development because software bugs cost both users and software providers dearly: a study conducted on behalf of the US National Institute of Standards and Technology estimated the cost of software bugs to the US economy to be around $60 billion. Perhaps the extent of this scourge is not surprising, since in many organizations software testing is not effective: testing/QA teams run "quality gates" as an afterthought, and even then testing does not necessarily translate into quality. When Agile came around, practitioners came up with new approaches to testing, aptly described under the banner of "Agile Testing", that provided some improvement by driving more collaboration across teams and moving testing earlier in the development cycle. Now, with the advent of DevOps, testing has taken on a new level of significance, since continuous delivery is not just about delivering software rapidly, but about delivering software that works as well. A few have even coined a term for this new discipline: continuous testing. All that is well, but what does testing mean in a continuous integration/delivery environment?

In a continuous delivery (CD) environment, quality becomes the responsibility of all. This does not mean that the QA and testing teams have no role to play in a CD environment. On the contrary, the QA and testing function moves into a strategic role, providing oversight, direction, and leadership for driving overall quality. For example, instead of spending countless hours running manual tests, QA teams will invest resources and time to develop and implement a comprehensive test automation strategy, or they will spend effort putting in place governance processes, metrics, and incentives to drive quality at every step. An example of how quality becomes everybody's responsibility is what the development staff do in such an environment. Development teams in a CD environment are empowered to take on quite a bit of testing themselves. In addition to a 'test first' approach, developers may also be required to run pre-commit testing that exercises a suite of unit, component, and integration tests. Indeed, many CI servers provide the capability for 'private builds', which allows an individual developer to see if their code changes can be integrated into the main trunk for a successful build. Pre-commit testing should enable developers to conduct a quick 'smoke test' to ensure that their work will not break the code in the main trunk; it may therefore contain a selection of integration and acceptance tests. Once the developer checks in the code to the CI server after pre-commit testing, the CI server runs the commit stage tests, which include any required static code analysis, component and integration testing, followed by system testing. Commit stage test results are immediately fed back to the development team so that any errors or bugs get addressed. Successful commit stage testing increases confidence that the build is a candidate for acceptance testing. Builds failing commit stage testing do not progress to the next stage: the acceptance testing stage.
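
One way to picture the progression just described is as a staged script. This is a hypothetical sketch: the stage names, test directories, and commands are assumptions for illustration, not any specific CI server's configuration. A build advances only when its current stage passes.

```python
import subprocess
import sys

# Hypothetical staged pipeline: each stage is a command; a failure stops the build's progression.
STAGES = [
    ("pre-commit smoke tests", ["pytest", "tests/unit", "tests/component", "-m", "smoke", "-q"]),
    ("commit stage",           ["pytest", "tests/unit", "tests/component", "tests/integration", "-q"]),
    ("static analysis",        ["flake8", "src"]),
    ("acceptance stage",       ["pytest", "tests/acceptance", "-q"]),
]

def run_pipeline() -> bool:
    for name, command in STAGES:
        print(f"--- running {name}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"!!! {name} failed; build does not progress to the next stage")
            return False
    print("build is a release candidate")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```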

Acceptance testing is the domain of the business analysts and business representatives assigned to the project team. However, this should not mean that development staff have no involvement in acceptance testing. Successful testing in a CD environment gives developers more ownership in driving quality by allowing them to conduct automated acceptance tests in their development environments. Common obstacles to enabling this, such as insufficient licenses and/or manual deployment and setup processes, need to be removed. Acceptance testing is a critical step in the deployment pipeline: a release is deemed acceptable for deployment only if it passes the acceptance test stage. The entire team should focus on fixing acceptance testing issues for a given release. A CD environment requires acceptance testing to be automated as much as possible: a fully automated acceptance testing suite enables the tests to be run for a build as and when needed – this speeds up the development process and also enables the creation of a powerful suite of regression tests that can be run over and over again. Some tools even offer capabilities to encode acceptance test criteria and to programmatically drive the creation of acceptance tests based on those criteria: thus testing, and hence ultimately the delivered software, can never be out of sync with evolving acceptance criteria and requirements.
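
An automated acceptance test, in this spirit, encodes the acceptance criterion directly as an executable, repeatable check. The minimal pytest-style sketch below is hypothetical: the place_order function stands in for the system under test, and the criterion is invented for illustration.

```python
# Hypothetical acceptance tests: the criterion "a submitted order is confirmed and reduces
# available stock" is encoded as executable, repeatable checks runnable with pytest.

def place_order(inventory: dict, sku: str, quantity: int) -> dict:
    """Stand-in for the system under test."""
    if inventory.get(sku, 0) < quantity:
        return {"status": "rejected"}
    inventory[sku] -= quantity
    return {"status": "confirmed", "sku": sku, "quantity": quantity}

def test_submitted_order_is_confirmed_and_reduces_stock():
    inventory = {"WIDGET-1": 10}
    result = place_order(inventory, "WIDGET-1", 3)
    assert result["status"] == "confirmed"
    assert inventory["WIDGET-1"] == 7

def test_order_exceeding_stock_is_rejected():
    inventory = {"WIDGET-1": 2}
    result = place_order(inventory, "WIDGET-1", 3)
    assert result["status"] == "rejected"
    assert inventory["WIDGET-1"] == 2
```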

If the system under development is a high-performance system, some capacity and performance testing may become part of acceptance testing as well. Usually, however, capacity testing and testing for other 'non-functional requirements' form a separate stage in a CD deployment pipeline. Although a CD environment requires such tests to be as automated as possible, e.g., through the use of Recorded Interaction Templates and other devices, the success criteria for such tests are somewhat subjective – so although a release may technically fail automated capacity testing, it may still be greenlighted to go ahead based on human judgment. Ultimately, as the release completes the non-functional testing stage gate, it may then be put through more of the traditional manual testing. This is where human testers can excel and apply their expertise in UI testing, exploratory testing, and in creating unique testing conditions that automated testing may not have covered. The manual testing effort is thus one of the last stages in the testing pipeline in a CD environment.

If testing is indeed to become 'continuous' in nature, there are several critical factors that need to be in place. Perhaps the most critical one is test automation, which is oftentimes dismissed by some practitioners as difficult to do or non-value-added. Whatever the reservations about automation, testing in a CD environment cannot possibly be efficient and effective without it – especially since testing is done in big numbers and done quite often. Automation is just one of the various test design and execution strategies needed to make testing execute efficiently and thus be successful in a CD environment. For example, CD practitioners recommend a commit testing stage lasting no more than 10 minutes – a hurdle that can be met only by adopting such strategies. Automation also applies to the provisioning of, and deployment to, environments. 'Push button' deployment and provisioning of test environments is critical if developers are to conduct smoke acceptance tests to quickly test their work. Similarly, test data needs to be managed effectively. Test design and test isolation need to be such that data requirements for testing purposes are fit for purpose and parsimonious: wholesale replication of production data is neither feasible nor recommended in a CD environment. Data management, like environment management, needs to be fully automated with configurable design and push-button techniques.

Testing has the opportunity to move from being a reactive, inspection-driven function to being a proactive, quality-focused initiative. Achieving this requires making a number of tough decisions related to processes, division of responsibilities, and organization of the testing effort. In a traditional environment, making these tough decisions is often optional. Moving to a CD environment, however, mandates that those decisions be made, which should be reason enough for organizations to start examining today how they can evolve and improve their testing efforts toward that ultimate model.