The economics of NFTs are interesting. Liquidity tends to gather in one place, hence the centralized marketplaces. But why centralized storage? Why do so many people buy NFTs that can disappear at any moment if the host takes them offline?
While storage could eventually be decentralized with IPFS, Arweave, and perhaps MaidSafe, today it’s still simply easier to start with a regular web host, especially when the tokens aren’t ready to be “frozen” yet and the collection is still being dynamically revealed. Thus we have the situation affecting the vast majority of NFT tokens to date:
What happens if the site goes offline? Filecoin for IPFS and the BTT token for BitTorrent exist so that people can pay for storage. Currently, however, it is the party publishing or storing the information who pays. Ethereum works similarly: smart contracts pay to store data using the SSTORE instruction, which has recently skyrocketed in price.
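To see why on-chain storage is so expensive: an SSTORE that fills a fresh 32-byte slot has classically cost about 20,000 gas, so a back-of-envelope sketch (gas and ETH prices below are hypothetical placeholders) looks like this:

```python
# Back-of-envelope sketch: the cost of storing raw bytes on-chain via SSTORE.
# 20,000 gas per previously-zero 32-byte slot is the classic figure; the
# gas price and ETH price used below are illustrative, not current values.
GAS_PER_SLOT = 20_000   # SSTORE writing a nonzero value to a zero slot
SLOT_BYTES = 32

def sstore_cost_usd(n_bytes: int, gas_price_gwei: float, eth_price_usd: float) -> float:
    """Approximate dollar cost of storing n_bytes in contract storage."""
    slots = -(-n_bytes // SLOT_BYTES)      # ceiling division
    gas = slots * GAS_PER_SLOT
    eth = gas * gas_price_gwei * 1e-9      # 1 gwei = 1e-9 ETH
    return eth * eth_price_usd

# Storing 1 KiB at (say) 50 gwei and $2,000 per ETH:
print(round(sstore_cost_usd(1024, 50, 2000), 2))  # 64.0 dollars
```

Even at these modest placeholder prices, a single kilobyte costs tens of dollars, which is why almost all NFT media lives off-chain.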
This is backwards: parties storing data have no idea about the levels of demand years into the future. While it would be nice to imagine we could find the area by taking the integral under the “demand curve relative to time” from now to 1000 years into the future, the truth is we just don’t know the shape of the curve. And even if we did, any system that decentralizes responsibility for storage would need to gradually remove the reliance on one centralized party to pay for that storage, thereby eroding their “private ownership” over that content.
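The unknowability of that integral can be made concrete with a quick numerical sketch (all numbers hypothetical): two demand curves with identical demand today can integrate to wildly different totals over the centuries.

```python
# Illustrative only: total storage demand is the area under a demand curve
# d(t), but the curve's shape is unknowable in advance. Both curves below
# have the SAME demand today (100 reads/year), yet integrate to very
# different totals over 1000 years.
import math

def total_demand(d, years=1000, step=1.0):
    """Trapezoidal integration of the demand curve d(t) from 0 to `years`."""
    n = int(years / step)
    return sum(0.5 * (d(i * step) + d((i + 1) * step)) * step for i in range(n))

exponential = lambda t: 100 * math.exp(-t / 10)   # interest fades fast
power_law   = lambda t: 100 / (1 + t) ** 0.5      # long, heavy tail

print(total_demand(exponential))  # roughly 1,000 total reads
print(total_demand(power_law))    # roughly 6,100 total reads
```

A storage provider prepaying for one curve while reality follows the other is either overcharging by 6x or going broke, which is the argument for charging at the moment of access instead.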
In decentralized systems, the people who want to access the data in the moment should be the ones paying. Currently, however, this happens indirectly: authors prepay for hosting, or “pin” things on Arweave or IPFS years in advance, and then try to charge people to access the data via subscriptions, micropayments, and so on.
What are their tools to restrict copying? Either copyright (via state enforcement and threats of consequences), digital rights management (via software required to be installed on all personal devices), or smart contracts (via blockchain enforcement, marking one specific address as the “official, authorized” one). Either way, the data can physically be copied; the hope is to mitigate copying through violence, through control over the manufacturing of all personal devices, or simply by betting that the “officialness” is worth more than the content itself.
All this introduces artificial scarcity into a system that inherently has none. Albert Wenger, a venture capitalist and general partner at Union Square Ventures (investors in everything from Twitter to CryptoKitties), has written a book called World After Capital, in which he points out that the marginal cost of making a copy of a file is nearly zero. The book can be freely downloaded at worldaftercapital.com.
Indeed, open, permissionless digital content has created far more wealth for the world than its closed, proprietary counterparts. Consider:
- Science vs Alchemy
- Wikipedia vs Britannica
- Linux vs Windows
- WebKit vs Internet Explorer
(This is also true of open protocols like HTTP / the Web vs America Online / Minitel, or VOIP vs the telecommunications companies).
Without artificial scarcity, more people can use and remix content and information in many different ways, as scientists do. Quality control can be done via peer review and curation.
Competition and Private Ownership vs Collaboration and Gift Economies
What if we could more accurately model the costs and how to recoup them? We could reimagine copyright and the capitalist model whereby venture capitalists and early investors make a substantial contribution up front and then want the project to extract rents forever (with restrictions on remix and reuse enforced by copyright, DRM, or some other artificially imposed scarcity). The Web3 economy proposed something like this: tokens, where the original authors get paid only once (never mind royalties on secondary sales), while the new “owners” of books, NFTs, etc. can resell their copies in secondary markets.
If we get away from the idea that all the investment must be made upfront in a COMPETITIVE model of private ownership (of ideas, enforced by patents, or of content, fictional characters, or entire fictional universes, enforced by copyright), then we arrive at more COLLABORATIVE models, as in science or on Wikipedia, where the work is done by “many hands,” each of whom makes a small bugfix or contribution. Helpful contributions can be rewarded incrementally with coins, while bugs and vandalism can result in a precipitous loss of accumulated coins. This requires that both the accumulation and spending of coins happen gradually (incidentally, an innovation of Intercloud), rather than allowing contributors to spend all their earned coins at once, before they can be penalized for destructive actions.
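That gradual earn-and-penalize mechanic can be sketched in a few lines (the vesting schedule and names below are hypothetical illustrations, not Intercloud’s actual design):

```python
# Toy sketch of gradual coin accumulation: rewards start out locked and
# unlock over time, so vandalism can still be penalized before a
# contributor cashes everything out. Parameters are hypothetical.
class ContributorAccount:
    VESTING_STEPS = 10  # a tenth of locked coins unlocks each period

    def __init__(self):
        self.locked = 0      # earned but not yet spendable
        self.spendable = 0   # fully vested

    def reward(self, coins: int) -> None:
        """Credit a helpful contribution; coins start out locked."""
        self.locked += coins

    def tick(self) -> None:
        """Each period, one slice of locked coins becomes spendable."""
        unlocked = self.locked // self.VESTING_STEPS
        self.locked -= unlocked
        self.spendable += unlocked

    def penalize(self, coins: int) -> None:
        """Vandalism burns locked coins first, then spendable ones."""
        burn = min(coins, self.locked)
        self.locked -= burn
        self.spendable = max(0, self.spendable - (coins - burn))

acct = ContributorAccount()
acct.reward(100)
acct.tick()        # 10 coins vest; 90 remain locked
acct.penalize(50)  # caught vandalizing: locked coins are burned first
print(acct.spendable, acct.locked)  # 10 40
```

Because most of the balance stays locked, the penalty still has teeth even after some coins have vested.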
We then start to get away from the idea that some specific party, which made a big investment upfront, now owns the copyright (especially for 100 years or in perpetuity) and must enforce it everywhere vis-a-vis the public, i.e. the consumer, who may very well have become a collaborative producer via some sort of remix, mashup, fanfic, etc. For example, Fifty Shades of Grey started as an unauthorized fanfic of Twilight. All those movies that followed, and all those cable ties sold for kinky encounters… what would the world have lost if this had been nipped in the bud?
Paying for Hosting
Actually, there are two major categories of costs:
- R&D, producing the digital content
- Infrastructure: storing and distributing the content
Tesla and Waymo spend money on self-driving AI software and datasets, but once version 3.4 outperforms version 3.3 with no downsides, they can distribute it to all cars at once, making them all “smarter” and more efficient. This is how progress in science and technology has taken place. The “distribution” can be done via people and books (as in eras past) or via computer networks, now that perfect copying of information has become so cheap. It’s easy to verify the hash of a file or chunk of a file, or even ask around about who is hosting it: that’s what Distributed Hash Tables are for. (I met and spoke with the inventor of Kademlia, still the top technique powering DHTs, years ago, when he agreed to advise Intercoin on DHTs.)
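The hash verification the paragraph mentions is simple to sketch. Content addressing is what lets any untrusted host serve the data, and Kademlia-style DHTs use XOR distance between such IDs to find who is hosting a chunk (the XOR metric below is Kademlia’s real one; the rest is a toy illustration):

```python
# Minimal sketch of content addressing: data is named by its own hash,
# so ANY host can serve it and any client can verify it. Kademlia-style
# DHTs locate hosts by XOR distance between IDs in the same hash space.
import hashlib

def content_id(chunk: bytes) -> str:
    """Content address: the SHA-256 digest of the chunk itself."""
    return hashlib.sha256(chunk).hexdigest()

def verify(chunk: bytes, claimed_id: str) -> bool:
    """A client can check a chunk received from an untrusted host."""
    return content_id(chunk) == claimed_id

def xor_distance(id_a: str, id_b: str) -> int:
    """Kademlia's notion of 'closeness' between node and content IDs."""
    return int(id_a, 16) ^ int(id_b, 16)

chunk = b"page 1 of some book"
cid = content_id(chunk)
assert verify(chunk, cid)              # genuine data passes
assert not verify(b"tampered!", cid)   # altered data fails immediately
```

The key property: trust lives in the hash, not in the host, so hosting can be a commodity marketplace.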
The hosting cost should be paid at the moment of access by the consumer of the content, not by the producer. This is true whether you’re reading a book (static data), watching a movie (streaming data), or using a chatroom (real-time streaming of collaboratively generated data). The tokens are paid to hosting companies (or a global marketplace of nodes and seeders, as in IPFS or BitTorrent), which can step up to host content, provide proof of hosting it, and earn micropayments for serving it. The tokens they earn should then be exchangeable in bulk for a digital currency with high liquidity, such as ETH or QBUX.
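A minimal sketch of this consumer-pays accounting might look as follows (the names, prices, and settlement threshold are hypothetical, not any real IPFS or BitTorrent API):

```python
# Hypothetical sketch of consumer-pays hosting: each chunk served earns
# the host a micropayment, which accumulates off-chain and is exchanged
# in bulk for a liquid currency once settlement is worthwhile.
PRICE_PER_CHUNK = 1  # micro-tokens per chunk served (illustrative)

class HostLedger:
    def __init__(self):
        self.earned = {}  # host -> accumulated micro-tokens

    def record_serve(self, host: str, chunks: int) -> None:
        """Credit a host for chunks it provably served to a consumer."""
        self.earned[host] = self.earned.get(host, 0) + chunks * PRICE_PER_CHUNK

    def settle(self, host: str, minimum: int = 1000) -> int:
        """Exchange accumulated micro-tokens in bulk for a liquid currency."""
        balance = self.earned.get(host, 0)
        if balance < minimum:
            return 0  # not yet worth the cost of an on-chain settlement
        self.earned[host] = 0
        return balance

ledger = HostLedger()
ledger.record_serve("node-a", 1500)   # served 1500 chunks to consumers
print(ledger.settle("node-a"))        # 1500 micro-tokens paid out in bulk
```

Batching the settlement is the design choice that makes micropayments viable: per-access payments stay tiny while the on-chain exchange happens rarely.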
The R&D cost can be recouped via cryptocurrency tokens issued by the content creators. The uniqueness of the token is the main thing that is enforced among creators, and it can be enforced by a network effect and the browser’s cross-domain security model rather than threats of violence. The network of all the tokens has to be large enough that it’s far more costly to fork it than to simply pay the micropayments.
You can find out a lot more about the economics and technology of micropayments on the Web at the following URL, where you can also see them compared to today’s restrictive models, including paywalls, subscriptions, copyright enforcement, etc.:
Paying for Resources
In fact, we can flip the model, allowing users to charge networks (like Facebook) for receiving notifications, get rewarded for inviting friends via their contact picker, and so on. Users would take these actions in their own trusted apps (user agents) on their own trusted devices (trusted computing base). Thus, companies would actually pay users for their time and attention, for voting and participating, and so on.
This is what we can bring about if we flip the script and accurately model the costs and how to recoup them. It is somewhat analogous to what Intercoin does with the sources and sinks of the local money supply: rather than having bank underwriters create money as loans based on guesses about how a business will perform over the next 5-10 years as it pays back the loan (removing money from circulation), the community software distributes a UBI to everyone, which is spent into circulation when a consumer actually chooses to spend it on something they need or want. Money can then be taken out of circulation by democratically chosen fiscal policies such as Pigovian taxes on negative externalities (pollution, factory farming, non-biodegradable plastic, congestion, fossil fuels at the point of emission, corporations forcing parents to work overtime and neglect their families, etc.).
All this can eventually be done on-chain and enable our communities to govern themselves better and solve collective action problems with the right incentives. It is this scale of social impact that Qbix and Intercoin are building the infrastructure to achieve.