If you have been in crypto for almost a decade, as most of us have, you will have noticed an unfortunate trend that continues unabated: projects operating in the space claiming paradigm-shifting breakthroughs and novel technical solutions to heretofore unsolved problems spanning the realms of game theory, cryptography, and distributed systems.
Yes, crypto-fatigue is real. The constant stream of upstart competitors offering dubious claims about their unique approaches to scaling, security, UX, and crypto-economics can quickly turn even the most starry-eyed newcomer into a cynical, seasoned investor ready to discount the latest entry in the space as just yet another self-styled Ethereum Killer, its lofty claims made only more improbable by a conspicuous lack of clearly stated tradeoffs or by implausible and reckless security assumptions.
While the constant barrage of questionable projects that tout their tech as the latest and greatest, appending increasingly meaningless version numbers (blockchain 2.0! 3rd-gen network!), shows no sign of slowing down, this shouldn't preclude the discerning investor from spotting the proverbial diamond in the rough.
Dfinity is, in our opinion, certainly one such diamond.
The origins of Dfinity can be traced all the way back to 2016 when Dominic Williams and Timo Hanke set out to radically change how the internet operates after having witnessed the potential offered by unstoppable, distributed, and autonomous code in the form of smart contracts on the then-nascent Ethereum platform.
So, what is Dfinity?
We think the best way to start answering this question is to plainly tell you upfront what Dfinity ISN’T:
Put simply, Dfinity isn't just another blockchain. The crucial difference between Dfinity and all other blockchain projects lies in the fact that the protocol was engineered specifically to store massive amounts of data on-chain and to serve as a fast, global storage and execution environment for code, one that can in some cases rival existing cloud computing providers, with no external services needed to access it.
It’s no secret that the current situation of the technology industry is untenable.
The status quo is a handful of monolithic, for-profit companies that shape and control every facet of what was once supposed to be a pluralistic, distributed, and egalitarian network model.
The tech arena is utterly dominated by the incumbents: Google, Amazon, Facebook, Apple, Twitter. Their tendrils extend well beyond their actual web properties and infrastructure, into technical committees drafting interoperability standards, public opinion, and policy.
This is not what Sir Tim Berners-Lee envisioned when he worked tirelessly to lay the foundations of what would become the Web as we now know it.
Platform risk is not only real but ubiquitous and oftentimes impossible to avoid, with increasingly costly and deleterious consequences for businesses.
Walled gardens such as proprietary and exclusive software distribution channels in the form of monopolistic App Stores are the norm rather than the exception.
All of this has attracted a great deal of scrutiny from regulators in various countries, and some have taken drastic measures to try and limit the power these behemoths wield over public discourse or to deter anti-competitive practices, with very little success, if any.
The appeal of a truly decentralized Internet Computer is immense, immediately apparent, and resonates with people the world over.
The Cambridge Analytica fiasco and the recent Twitter data breach, which exposed powerful internal administrative tools and led to the takeover of prominent accounts on an unprecedented scale, are just reminders of how centralization of data can have serious social repercussions.
People in the know are also acutely aware of another often underestimated but crucial pain point plaguing our current version of the internet: the staggering amount of technical debt and the immense burden posed by the complexity of the tech stack that underpins it.
Having to manage, secure, extend, and improve an aging foundational layer that was never designed to properly support many of the use cases that have since come to fruition is an endless, monstrous task requiring an exorbitant number of skilled and knowledgeable caretakers.
The good news is that Dfinity is seeking to change all of this and we believe they have a real shot at making it a reality and are poised to succeed where previous efforts have failed. Here’s why.
It all starts with an exceedingly ambitious vision: creating a universal protocol to coordinate, deploy, replicate, validate, secure, store, and execute code and data between a multitude of participants who supply computational resources and storage space to a global resource pool.
They have been toiling away in relative secrecy for almost half a decade now and have amassed an impressive number of incredibly talented people at the forefront of their respective fields, some widely regarded as luminaries.
These are the kind of distinguished system engineers, cryptographers, security researchers, and programmers that you can’t simply entice to work on a mediocre project with a large paycheck.
These people are drawn to technical excellence and a cohesive, daring vision like moths to a flame, and their willingness to be associated with a particular project is the best indicator of its legitimacy. There is simply no way that so many illustrious academics would risk tarnishing their carefully cultivated reputations and standing in academia by being involved in a sham.
We will just mention a couple of key people in technical and operational roles, but we highly encourage you to peruse the team page on the Dfinity website, as it's a veritable who's who of technologists.
Jan Camenisch is a remarkable and prolific cryptographer who over the last three decades has authored countless widely cited research papers in the field. Dfinity poached him from his previous employer, IBM, where he spent 20 years as a Principal Research Staff Member.
Ben Lynn is another superstar in the cryptography world, and one of the very few people who can claim the indisputable honor of having the initial of his last name immortalized in a novel cryptographic scheme he co-authored, BLS, which is seeing broad adoption across the crypto space.
Say you want to create a blazing-fast, scalable, and interoperable execution environment: who would you want to design it? Andreas Rossberg certainly fits the bill. As one of the creators of the WebAssembly specification during his tenure at Google, he would be pretty high on the (very short) list of people up to the task.
Honestly, with research centers located in Zurich, San Francisco, and Palo Alto, we could have dedicated a few more pages just to extolling the virtues of the many incredibly talented people Dfinity has managed to band together. Don't worry, we won't bore you to death, but if there's one thing you should take away from this, it's the bona fide technical pedigree of their R&D and engineering teams.
We keep stressing this point because, had they not amassed such a critical mass of talent and shown impressive progress, we too would be skeptical of their ability to bring to fruition the moonshot that is the Internet Computer as they envision it.
Ethereum initially billed itself as the "World Computer", and while the debate still rages over whether it fully lived up to that name, no one can say it wasn't a great branding strategy. And since, as we all know, good artists copy and great artists steal, it was the next logical step for Dfinity to name their labor of love the Internet Computer. It may be derivative, but it's an apt description of what they are trying to achieve.
The first pillar on which they intend to rebuild the internet is called "Canisters". What is a Canister, you say? Well, it's an isolated, single-threaded, deterministic, cryptographically secure execution environment that is deployed, orchestrated, and interacted with over the Internet Computer Protocol. Sounds like word salad? Then think of it as a smart contract on steroids. And yes, we know it's a trite comparison that has been used to death in misleading ways by all kinds of Ethereum wannabes to drum up interest, but it really is the best approximation we can offer while referencing existing crypto projects. If you are familiar with more traditional IT lingo, you can think of a Canister as being similar to a process.
It too executes code, but this is where the similarity ends, as a Canister differs from a traditional process in a few key areas. First of all, a Canister can't terminate due to invalid inputs or errors caused by faulty logic, because there is no way for it to abort, panic, or otherwise stop. If a Canister crashes, it automatically reverts to the state it held before receiving the input that broke it. This is a nifty failsafe that in many cases prevents a Canister from entering an inoperable state. Canisters are also replicated across all nodes spanning a subnet of the Internet Computer, and they can be deployed, controlled, and decommissioned only by a user or by another Canister with administrative privileges.
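To make the rollback behavior concrete, here is a toy model in Python. It is purely illustrative: the class, the method names, and the snapshot-and-restore mechanism are our own sketch, not how the Internet Computer actually implements this failsafe.

```python
import copy

class Canister:
    """Toy model of a canister's rollback-on-error semantics.

    State changes made while handling a message survive only if the
    handler completes successfully; any error rolls the state back.
    """

    def __init__(self):
        self.state = {"counter": 0}

    def handle(self, update):
        snapshot = copy.deepcopy(self.state)  # checkpoint before execution
        try:
            update(self.state)                # run the message handler
        except Exception:
            self.state = snapshot             # error: revert to checkpoint
            return "rejected"
        return "accepted"

canister = Canister()
canister.handle(lambda s: s.update(counter=s["counter"] + 1))  # accepted
canister.handle(lambda s: s.update(counter=s["counter"] / 0))  # raises, reverted
print(canister.state)  # {'counter': 1}
```

The second message divides by zero, so its effects are discarded and the canister keeps serving requests from its last good state.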
The technology that makes all of this possible is heavily dependent on WebAssembly, and as a result Canisters benefit from one of WebAssembly's greatest strengths: the ability to run code written in any of the multitude of programming languages that compile to WebAssembly. Canisters can interoperate with each other even if they were written in different languages. WebAssembly also has formal semantics, opening the way for a formally verified ecosystem to develop in the long term, further increasing the robustness, predictability, and security of code running on the Internet Computer.
This is where things take an interesting turn, as the next revolutionary property of the Internet Computer is how it stores and retrieves data. Canisters have a memory limit of a couple of gigabytes and no disk storage of their own. So where do you store data? Well, the ICP is tasked with keeping state on behalf of Canisters and abstracting the data-storage part of the equation away from developers.
How exactly it achieves that is still a matter of speculation, as few concrete details have been offered so far. We do know, at a high level, that it will work much like the traditional object storage offered by legacy cloud computing platforms, and that canisters will be able to perform standard operations such as GET, PUT, APPEND, DELETE, LIST, STATUS, and so on.
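To give a feel for that storage interface, here is a minimal in-memory sketch in Python. The class and method signatures are our own invention; only the verbs (GET, PUT, APPEND, DELETE, LIST, STATUS) come from what Dfinity has previewed, and the real API may look quite different.

```python
class ObjectStore:
    """Toy in-memory model of the object-storage verbs listed above."""

    def __init__(self):
        self._objects = {}  # key -> bytes

    def put(self, key, value: bytes):
        self._objects[key] = value

    def get(self, key):
        return self._objects.get(key)

    def append(self, key, more: bytes):
        self._objects[key] = self._objects.get(key, b"") + more

    def delete(self, key):
        self._objects.pop(key, None)

    def list(self, prefix=""):
        return [k for k in self._objects if k.startswith(prefix)]

    def status(self, key):
        obj = self._objects.get(key)
        return {"exists": obj is not None,
                "size": len(obj) if obj is not None else 0}

store = ObjectStore()
store.put("photos/cat.jpg", b"\xff\xd8...")
store.append("logs/app", b"started\n")
store.append("logs/app", b"ready\n")
print(store.list("logs/"))       # ['logs/app']
print(store.status("logs/app"))  # {'exists': True, 'size': 14}
```

The point of the sketch is the shape of the interface: developers address named objects with simple verbs, while replication and durability happen below the API surface.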
This will be a massive boon to ease of use and will significantly improve the developer experience. These standard functions come together to form one of the first services to be previewed on the Internet Computer, BigMap, which will be accessible to developers through APIs. Keep in mind that this data storage is highly resilient and accessible by all of the nodes forming a single subnet. This is absolutely unheard of in the blockchain space and is much more akin to an Amazon S3 bucket, and yet it is distributed and cryptographically signed. The blockchain is slowly but surely becoming the Cloud, and Dfinity is at the forefront of this shift, blurring the line between traditional cloud architectures designed to be operated by a monolithic entity and distributed systems composed of independent data centers across the globe. But that's not all: Dapps built on Dfinity will need powerful search capabilities to sift through all the data they can now store. That precise capability is offered by another fundamental building block called BigSearch, an indexing and search framework that operates much like Elasticsearch and can perform advanced searches using stemming and other techniques to detect similar keywords and return appropriate matches.
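Stemming is the technique of reducing words to a common root so that "searching" matches documents containing "searched". The sketch below uses a deliberately naive suffix-stripper (real engines use algorithms such as Porter's) and a tiny inverted index; it illustrates the idea, not BigSearch's actual implementation.

```python
def stem(word):
    """Naive suffix-stripping stemmer, for illustration only."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: len(word) - len(suffix)]
    return word

class SearchIndex:
    """Tiny inverted index keyed on stems, mimicking how a
    BigSearch-like service can match similar keywords."""

    def __init__(self):
        self.index = {}  # stem -> set of document ids

    def add(self, doc_id, text):
        for word in text.lower().split():
            self.index.setdefault(stem(word), set()).add(doc_id)

    def search(self, query):
        # Documents must contain a match for every query term.
        return set.intersection(
            *(self.index.get(stem(w), set()) for w in query.lower().split())
        )

idx = SearchIndex()
idx.add(1, "searching stored data")
idx.add(2, "data searched quickly")
print(idx.search("search data"))  # {1, 2}
```

Both documents match the query "search data" even though neither contains the literal word "search", because "searching" and "searched" share the stem.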
The last piece of the puzzle that sits at the core of the Internet Computer Protocol is its on-chain governance system dubbed the Network Nervous System.
The Network Nervous System, or NNS, is responsible for many critical tasks that are crucial to the health, security, and performance of the network, such as dynamically allocating resources from participating data centers, performing routine operations needed to guarantee data availability, and ensuring the security and authenticity of data traveling across the network.
The NNS, much like its anatomical namesake, is formed by tens of thousands of Neurons: single governance cells that come together to collectively perform governance actions on the ICP.
To create a Neuron, you need to lock ICP tokens inside a timelock contract. But wait, what are ICPs?
Yes, as you may have correctly surmised when you started reading this article a few minutes ago, the Internet Computer has its own native token, used to reward network participants and to keep them honest through economic incentives and penalties. In fact, it has not one but two tokens: ICP and Cycles. But let's focus on the former for now.
ICP is primarily a governance token, but unlike the vast majority of ERC20-based governance tokens used by DeFi projects living on Ethereum, this one has actual utility.
Its primary function, as we said, is to provide collateral value by being locked inside a Neuron to participate in the governance process. Neurons are not all born equal: the voting power of each depends on variables such as the amount of value locked, the age of the Neuron (defined as the amount of time it has been participating in the governance process), and a third parameter known as the Dissolve Delay, a user-defined amount of time before the locked value in a neuron is returned to the user after he or she has invoked the Dissolve function to effectively stop the Neuron. The Dissolve Delay can be thought of as the crypto equivalent of a time-deposit bank account, where you are free to request a withdrawal at any time but your capital is only disbursed after a previously agreed-upon period.
All things being equal, a neuron with the same balance of locked ICP as another but a higher Dissolve Delay will have proportionally higher voting power, rewarding the long-term commitment of the user who created it and ensuring stakeholders' interests align with those of the network as a whole. Of course, with higher responsibility come higher rewards: a neuron with a high Dissolve Delay will earn more participation rewards than a less invested one.
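The interplay of stake, age, and Dissolve Delay can be sketched as a simple formula. The bonus caps and linear scaling below are assumptions we chose for illustration; Dfinity had not published final parameters at the time of writing.

```python
def voting_power(stake_icp, neuron_age_years, dissolve_delay_years,
                 max_delay_bonus=1.0, max_age_bonus=0.25,
                 max_delay_years=8, max_age_years=4):
    """Illustrative voting-power formula combining the three variables
    discussed in the text: locked value, neuron age, and Dissolve Delay.

    The specific bonus caps and the linear scaling are hypothetical,
    not Dfinity's published parameters.
    """
    delay_bonus = max_delay_bonus * min(dissolve_delay_years,
                                        max_delay_years) / max_delay_years
    age_bonus = max_age_bonus * min(neuron_age_years,
                                    max_age_years) / max_age_years
    return stake_icp * (1 + delay_bonus) * (1 + age_bonus)

# Same stake, but a longer Dissolve Delay yields more voting power:
print(voting_power(100, 0, 1))  # 112.5
print(voting_power(100, 0, 8))  # 200.0
```

Whatever the real constants turn out to be, the shape is the point: power grows with locked value and with how long the holder commits to keeping it locked.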
A brilliant way to incentivize stakeholders to create neurons comes from the Dfinity team's announcement that it will distribute all ICP tokens to financial contributors pre-locked inside neurons with a preset age, encouraging backers to avoid immediately dissolving their neurons to cash out their ICP and to instead take part in the governance process long term. We think this is a great way to pre-seed neurons, allowing a strong community centered on the governance process to grow organically.
This brings us to the second use case for ICP tokens: their ability to be converted into Cycles.
Cycles are the other token powering the Internet Computer and are Dfinity's counterpart to ETH, in the sense that they are used to pay for computing resources such as network fees, CPU cycles, RAM, and storage used by Canisters and, ultimately, applications. They also serve to prevent DDoS attacks: attaching a small monetary cost to transactions acts as a rate limiter and ensures an attacker can't effortlessly and freely spam the network.
An important aspect of the conversion of ICP into Cycles is that the exchange rate is not fixed but dynamically adjusted by the Network Nervous System itself in response to external conditions.
This peculiar token-economics model allows approximately 1 CHF (that's a Swiss franc) worth of ICP to always be exchanged for a trillion cycles, a unit called a T.
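Under that peg, converting ICP into cycles is a one-line calculation. The ICP price in the example is hypothetical, chosen only to show the arithmetic.

```python
def icp_to_cycles(icp_amount, icp_price_chf):
    """Convert ICP into cycles under the peg described above:
    1 CHF worth of ICP always buys 1 trillion cycles (1 T).

    icp_price_chf stands in for the market rate the NNS would
    observe; the figure used below is made up for illustration.
    """
    CYCLES_PER_CHF = 1_000_000_000_000  # 1 T
    return icp_amount * icp_price_chf * CYCLES_PER_CHF

# If 1 ICP hypothetically trades at 4 CHF, 2 ICP convert to 8 T cycles:
print(icp_to_cycles(2, 4.0) / 1e12)  # 8.0
```

Note that as the market price of ICP rises, each ICP buys proportionally more cycles, which is exactly what keeps the cost of compute stable in CHF terms.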
Canisters continually burn cycles in order to operate, and since cycles can only be obtained by acquiring ICP tokens, this means that as long as the network sees sustained utilization, the number of available cycles will constantly decrease. This is a nifty deflationary model!
Not only that, but since the price of cycles is fixed, developers building applications on the Internet Computer benefit from stable and predictable costs for computing resources, and cycles act as a sort of stablecoin that can double as a store of value. By tying the recurring computing costs and the operation of the Internet Computer as a whole to cycles as its sole payment method, the Dfinity Foundation ensures that any fluctuation in the market price of cycles will quickly self-correct: market participants will rush in to scoop up cheap gas that is always in demand, stabilizing the price relative to the available supply.
Participating data centers that choose to make computing resources available to the Internet Computer will be paid in ICP. But since ICP's value is volatile by design and data centers need a predictable revenue stream, the amount of ICP paid for a given set of computing capacity will also be dynamically determined by the NNS to reach a pre-agreed value denominated in USD.
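That payout rule is easy to sketch: the NNS effectively divides the agreed USD amount by the current ICP price. The target and prices below are hypothetical figures for illustration only.

```python
def icp_payout(target_usd, icp_price_usd):
    """Sketch of the remuneration rule described above: the ICP paid
    to a data center floats so the payout hits a fixed USD target.

    Both arguments are hypothetical; the NNS's real inputs and
    mechanics have not been detailed publicly.
    """
    return target_usd / icp_price_usd

# A data center owed 3,000 USD receives fewer ICP when the token
# price is high, and more when it is low:
print(icp_payout(3000, 30.0))  # 100.0 ICP
print(icp_payout(3000, 15.0))  # 200.0 ICP
```

The data center's revenue stays constant in USD either way, which is precisely the predictability the scheme is designed to provide.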
This remuneration scheme is very ingenious and should allow a robust token ecosystem and markets to develop and flourish while providing substantial economic rewards for early network participants.
The Dfinity network has undergone several iterations, the most recent being Sodium, released only two months ago. Sodium is still just a preview of the network, but now with token economics fully baked in for developers looking to experiment and build the next great thing on the Internet Computer.
The public launch of the network in its nearly final incarnation is expected with the Mercury release in Q4 2020. We are eagerly awaiting this momentous occasion and will endeavor to snag a hefty allocation of ICP tokens as soon as trading opens.