
How Ethereum 2.0 is Redefining Blockchain Security

So you may have heard some news recently about ETH 2.0’s long-awaited and much-bandied-about official multi-client testnet, Medalla, crashing and burning, only to re-emerge from the flames after a few days of downtime. The event didn’t leave the network unscathed; far from it. Don’t listen to the noise, however: we’ll tell you why it’s the best thing that could have happened, and why Ethereum 2.0 is on track to be the most decentralized and resilient blockchain yet thanks to a razor-sharp focus on security.

What Happened With Medalla Anyway?

A majority of the Medalla beacon nodes were running the Prysm client developed by Prysmatic Labs. This set the stage for the actual issue to have such an outsized impact and cascading effects, and it is a powerful reminder of the perils posed by a lack of client diversity and the over-representation of a single client in a blockchain network.

The ETH 2.0 beacon chain relies heavily on the assumption that all network participants share the same clock time in order to properly propose/validate blocks and perform other duties. Clock skew is a real issue, and the Prysm developers had baked in a system that used Cloudflare’s Roughtime protocol to obtain digitally signed, reliable clock time for the benefit of users.

Unfortunately, it turns out that Roughtime uses a pool of servers to determine the time: if one of these servers misbehaves and reports a time that is far in the past or in the future, it gets averaged in, and the resulting time ends up way off from the real time. This is a clear case where using a median instead of an average would have almost completely mitigated the issue.

This large clock skew trickled down to ETH 2.0 nodes running Prysm because the developers had designed the Roughtime sync to automatically adjust the local system clock whenever it detected a deviation from the time reported by Roughtime. This is obviously a bit haphazard and should only be done when the discrepancy is minimal, e.g. on the order of a minute or less. Since the Roughtime servers were reporting a time more than four hours in the future, this got propagated to anyone running a Prysm instance and effectively made all the Prysm beacon nodes unable to work with the other clients.

In the ensuing chaos, and the mad scramble that followed to get everything back on track, additional mistakes were made that led to validators being slashed and network participation dropping even further; in the end it took several days for the network to reach finality again.

This catastrophic scenario was an invaluable opportunity for client teams to gather data as it was unfolding, to hone their incident response playbooks, and to uncover a lot of edge cases that simply would not have emerged had the network not been in such a degraded and fragmented state. All clients suffered massive resource usage spikes due to the sheer number of different forks and the strenuous loads they imposed; these conditions would have been almost impossible to recreate in a synthetic, controlled environment. The Medalla incident led to a multitude of improvements in all the clients (and Prysm now only uses Roughtime as a time source to warn users of possible clock skew, but does NOT alter the system clock). If you want to learn more about this specific incident, we highly recommend the very thorough and informative posts by Benjamin Edgington and the excellent postmortem authored by the Prysmatic Labs team.
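Coming back to the averaging pitfall at the root of it all, here is a minimal sketch (with entirely made-up skew values) of why a median tolerates a single misbehaving time server while a mean does not:

```python
import statistics

# Simulated skew reports (in seconds) from a pool of Roughtime-style servers;
# four are honest, one misbehaves and reports a time ~6 hours in the future.
reports = [0.2, -0.4, 0.1, 0.3, 21_600.0]

mean_skew = statistics.mean(reports)      # dragged hours off by one outlier
median_skew = statistics.median(reports)  # barely affected by it

print(f"mean skew:   {mean_skew:.2f}s")   # 4320.04s -- wildly wrong
print(f"median skew: {median_skew:.2f}s") # 0.20s -- close to the truth

# And, per the lesson above: never silently adjust the system clock on a
# large discrepancy; warn the operator instead.
MAX_AUTO_ADJUST = 60.0
if abs(median_skew) > MAX_AUTO_ADJUST:
    print("clock skew too large -- warning the user instead of adjusting")
```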

Client Diversity Is Paramount

It all starts with a simple yet unavoidable question: how do you actually design a blockchain to be as resilient and hardened as possible against a multitude of highly skilled adversaries and black swan events?

An immediate, binary choice with far-reaching implications is how many client implementations of the protocol will be available: a single one, or many?

Having a blockchain network comprised of more than one protocol implementation (a client) means that if something goes horribly wrong with one client, be it an actively exploited bug in the wild or a legitimate code update with unforeseen consequences, the network does not necessarily come to a standstill, thanks to the other, unaffected clients.

This begs the question: why don’t blockchains have multiple clients? Well, developing (and maintaining!) a blockchain client is a complex and time-consuming undertaking that requires a diverse skillset. It demands a profound grasp of multidisciplinary subjects such as networking, security, cryptography, distributed systems, economics and more!

Now, for those of you looking for the fly in the ointment, there is a notable downside: different client implementations, which are often coded in different programming languages, can have subtle divergences in how they interpret and implement the protocol spec. These differences can lead to consensus bugs: two (or more) clients being unable to agree on the validity of a certain object, which in a blockchain context can be a block, a transaction, or something else. This can have severe consequences for the health of the network and lead to forks, downtime and, when the dust eventually settles, even reverted transactions.
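To make this concrete, here is a toy sketch of how two independent readings of the same spec sentence can split a network. The validity rule here is entirely hypothetical:

```python
MAX_AMOUNT = 2**64 - 1  # hypothetical protocol constant

def client_a_is_valid(amount: int) -> bool:
    # Client A reads the spec as "amount must not exceed the maximum".
    return 0 <= amount <= MAX_AMOUNT

def client_b_is_valid(amount: int) -> bool:
    # Client B reads the same sentence as "amount must be below the maximum".
    return 0 <= amount < MAX_AMOUNT

# A perfectly legal-looking edge case splits the network in two:
edge = MAX_AMOUNT
print(client_a_is_valid(edge))  # True  -> A accepts the block containing it
print(client_b_is_valid(edge))  # False -> B rejects it, and a fork is born
```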

As with everything else in the blockchain space, security and decentralization are a tough balancing act. There are various schools of thought on this, but we believe monocultures to be dangerous and are firmly in the camp that believes multiple clients offer better overall security and resiliency. Another benefit is that the ecosystem, and ultimately its users, are not beholden to a single entity (the client developers) that could hold the network captive, exert disproportionate influence and dictate the overall direction of the project.

With that out of the way, let’s get back to Ethereum 2.0 and how it’s reshaping the blockchain security landscape. To start with, it has FIVE (5!) clients under active development as of now. While we fully expect this number to decrease over time as users converge on the more mature and robust implementations, this is nonetheless an amazing evolutionary process. The clients best able to thrive in the real world will emerge as the winners; it’s comparable to natural selection and will ultimately greatly benefit the network. We think the inevitable consolidation phase will leave us with battle-hardened clients that have withstood a barrage of adversarial conditions and, more importantly, dev teams with an intimate understanding of the security landscape.

Audit All The Things

So we have client diversity covered; now what? The next logical step is ensuring that these clients are as secure, performant, stable and robust as possible. As is standard practice for complex, mission-critical systems expected to hold a significant amount of value, audits are involved. All of the ETH 2.0 clients have engaged, or are in the process of engaging, third-party security auditors to perform a thorough review of their codebases, simulate adversarial scenarios, uncover potential bugs and security vulnerabilities, and suggest appropriate fixes and mitigations.

Selecting and working with a company to audit your codebase is a deeply involved endeavor that requires tight collaboration and a strong commitment, from the start of the process (usually an RfP, or Request for Proposals, where you delineate the scope of the audit and solicit bids from qualified security vendors) to the end of the relationship, which usually concludes with an audit report and a request for comments.

It’s important to stress that an audit is not a fire-and-forget tool, but rather an ongoing undertaking, and only one layer in a defense-in-depth approach to blockchain security.

It’s not only the ETH 2.0 clients that have undergone comprehensive security audits. The reputable firm Trail of Bits was engaged to assess the security of the CLI tool prospective stakers will use to generate the cryptographic keys that control their validators. The audit uncovered two high-severity issues and suggested several less critical improvements, but also noted the general high quality and maturity of the code reviewed.

Another encouraging sign that the EF is deeply conscious of all the critical moving parts that will have to interact to bootstrap the beacon chain and allow validators to deposit ETH for staking duties is that they went a step further than a simple audit for a piece of code that is arguably one of the most vital puzzle pieces in ETH 2.0: the deposit smart contract.

This is a smart contract deployed on the Ethereum mainnet that will act as a one-way bridge for people to transfer their ETH from the current ETH 1.0 chain to the beacon chain to be staked. Runtime Verification conducted a formal verification audit of this smart contract at the behest of the EF. What is a formal verification audit? A thorough answer would require a separate post, but the gist is that it is a process that makes it possible to mathematically prove (or refute) the correctness of a specific algorithm against its specification. This is one of the most rigorous and challenging vetting procedures available in the software realm, and it speaks volumes about the EF’s extraordinary dedication to shipping secure code.
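As a toy flavor of the kind of property such a process proves (purely illustrative; this is not the actual specification Runtime Verification worked against), one might state that a successful deposit increments the on-chain deposit count by exactly one, and that no reachable state can ever decrease it:

```latex
% Hoare-style proof obligation (illustrative only)
\{\, \mathit{deposit\_count} = n \;\wedge\; n < 2^{32} - 1 \,\}
\;\; \texttt{deposit()} \;\;
\{\, \mathit{deposit\_count} = n + 1 \,\}

% Invariant over every reachable transition s \to s'
\forall\, s \to s' :\quad \mathit{deposit\_count}(s') \;\ge\; \mathit{deposit\_count}(s)
```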

We believe there are no silver bullets in the security field, but gaining actionable insights from a high-quality audit report compiled by a reputable firm is a great and necessary first step.

Bug Bounties and Attacknets

As befits a project as ambitious and complex as Ethereum, especially one securing billions in value, the EF is building a world-class, dedicated in-house security team whose sole focus will be ETH 2.0.

They have received numerous applications from highly qualified people, and we have no doubt they will succeed in amassing a sizeable amount of InfoSec talent.

But as part of a multi-pronged approach to security, the EF has also spearheaded a bug bounty program covering the Phase 0 portion of the ETH 2.0 spec. The pieces of the project considered in scope are well detailed, and the bounties are very generous, ranging from a minimum of $1,000 for bugs rated as low severity/no impact up to $20,000 for critical bugs with the potential to severely impact the network.

This is in addition to the rewards offered for successfully breaking the ad hoc testnets bootstrapped and maintained by the EF, cheekily and aptly named attacknets.

The Ethereum Foundation initially deployed multiple attacknets, dubbed beta-0, each formed solely of nodes running a single client, in order to purposefully lower the overall security of the network and the barrier to entry for whitehat hackers and security professionals looking to probe and exploit client-specific vulnerabilities.

These attacknets have since been deprecated and decommissioned, but not before they fulfilled their role and yielded some successful exploits, mainly targeting the networking layer, that were able to prevent finality and were thus eligible for bounties, as summarized in the Trophies section.

The single-client beta-0 attacknets were retired in favor of a multi-client attacknet dubbed beta-1. This attacknet is still operational, and so far no one has successfully claimed a bounty on it; if you have a penchant for breaking things, this could be a nice way to help ETH 2.0, gain lasting fame and net some cash.

Look for more parts of the spec and client implementations to be covered under the program as the EF transitions to a dedicated ETH2 bug bounty portal with a public leaderboard, following in the footsteps of the pioneering ETH1 initiative that has led to countless security issues being reported and fixed.

The bug bounties on offer reflect the openness of the Ethereum ecosystem and the deep-rooted commitment by the EF to fostering an open and collaborative environment, one that extends to every facet of the project’s security footprint. We think this approach is likely to pay off big time in the long term.

The Secret Weapon: Community Fuzzing

All of this brings us to the final and most disruptive initiative that has emerged from the security related efforts in the Ethereum 2.0 ecosystem.

The EF generously funded a grant to develop a comprehensive fuzzing framework targeting most of the Ethereum 2.0 clients. We consider this to be the magic arrow in the EF’s quiver.

But what exactly is fuzzing?

We’re glad you asked. Fuzzing is a process in which an automated program overwhelms a target piece of software with a deluge of random (and not-so-random) inputs, in the hope of inducing a crash or unexpected behavior that could otherwise manifest in the real world when certain conditions arise. Still not sure you understand? Well, this very vivid analogy, courtesy of Afri Schoedon, comes to the rescue:

“Imagine you have an unlimited number of kids of all ages asking you seemingly random questions non-stop. The moment you have a mental breakdown, the psychologist will write down the question that caused it and try to repair you, so you will withstand it next time. “

Makes it a lot clearer, right?
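And for the more technically inclined, here is a minimal, self-contained sketch of the idea: a toy mutation-based fuzzer hammering a deliberately buggy parser. Everything here is invented for illustration; real frameworks add coverage feedback, corpus management and much more:

```python
import random

def parse_packet(data: bytes) -> bool:
    """Toy target: a length-prefixed packet parser with a lurking bug."""
    if len(data) < 2:
        raise ValueError("too short")       # graceful, expected rejection
    length = data[0]
    payload = data[1:1 + length]
    checksum = data[1 + length]             # BUG: IndexError on truncated input
    return sum(payload) % 256 == checksum

# Seed corpus: known-good inputs upon which mutations are performed.
corpus = [b"\x03abc\x26", b"\x00\x00"]

def mutate(sample: bytes) -> bytes:
    """The simplest mutations: flip a bit, drop a byte, or append one."""
    data = bytearray(sample)
    choice = random.randrange(3)
    if choice == 0 and data:
        data[random.randrange(len(data))] ^= 1 << random.randrange(8)
    elif choice == 1 and data:
        del data[random.randrange(len(data))]
    else:
        data.append(random.randrange(256))
    return bytes(data)

random.seed(1)
for i in range(100_000):
    sample = mutate(random.choice(corpus))
    try:
        parse_packet(sample)
    except ValueError:
        pass                                 # handled rejection: not a bug
    except IndexError:                       # the "mental breakdown"
        print(f"crash after {i} tries, reproducer: {sample!r}")
        break
```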

Having a fuzzing framework to uncover bugs that would be basically impossible for a human to detect nicely complements ETH 2.0’s already well-rounded approach to a robust security posture, but what is absolutely unprecedented is the revolutionary way in which the EF decided to engage stakeholders as part of the process.

Traditionally, fuzzing is performed by security firms contracted by a client, or done in-house by large entities that have a dedicated security team and are well versed in shipping secure code, as it is fairly onerous and time-consuming. In most cases the work is closely guarded: while the tools and techniques are sometimes open-sourced, they are very rarely disseminated together with the custom-tailored code targeting a specific piece of software or property, because of the high potential for attackers to use them for nefarious purposes, e.g. finding a security vulnerability and exploiting it instead of reporting it.

The Sigma Prime devs took a very different approach in publishing the Beacon-Fuzz tool, the opposite of the security-through-obscurity credo so often ingrained in Fortune 500 companies and even in some less open-minded blockchain projects.

Not only did they release the tool publicly under a very permissive license, but they also pre-populated it with corpora (sets of predefined inputs upon which to perform mutations) designed to kickstart the fuzzing of existing ETH 2.0 clients.

They are actively encouraging members of the Ethereum community to run the tools, going as far as providing assistance in setting up the local fuzzing environment and troubleshooting issues (you can reach them on the #fuzzing channel in their Discord using this link, should you want to join the fuzzing ranks).

This effort is notable because their reasoning is that the benefits gained by having a diverse set of stakeholders run the fuzzing software far outweigh the risk of an attacker using the tool to uncover and exploit bugs in the clients, especially while the network has not yet reached mainnet status and is not securing real value.

So far their intuition has proven largely correct: the response from the community at large was enthusiastic, and some dedicated community members who recognized the value on offer started using the tool and eventually managed to find bugs affecting quite a few clients, as detailed in this blog post.

The number of stakeholders with a significant vested interest in a successful beacon chain launch, its ongoing security and pristine uptime, thanks to the economic incentives that underpin the network (namely the staking rewards for proposing and validating blocks, and the desire to avoid the stiff penalties that could result from security incidents), means there is a large and ever-growing pool of users highly motivated to run the tool.

The Sigma Prime crew is not resting on its laurels, though, and has recently added new capabilities to the fuzzing framework, extending its already powerful features to not only find bugs affecting a single client but also compare how each client performs the various state transitions, in order to find discrepancies from the canonical reference spec and potential consensus issues between the different implementations!
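Stripped to its essence, differential fuzzing feeds identical inputs to every implementation and flags any disagreement. Here is a sketch of the idea, with a hypothetical balance-update rule standing in for a real state transition (this is not Beacon-Fuzz itself):

```python
import random

def client_a_transition(balance: int, delta: int) -> int:
    # Implementation A reads the spec as "clamp the balance at zero".
    return max(0, balance + delta)

def client_b_transition(balance: int, delta: int) -> int:
    # Implementation B wraps around like a uint64 instead of clamping --
    # a subtle divergence that only shows up on extreme inputs.
    return (balance + delta) % 2**64

random.seed(7)
for _ in range(10_000):
    balance = random.randrange(2**32)
    delta = random.randrange(-2**32, 2**32)
    a = client_a_transition(balance, delta)
    b = client_b_transition(balance, delta)
    if a != b:
        print(f"consensus divergence at balance={balance}, delta={delta}:")
        print(f"  client A says {a}, client B says {b}")
        break
```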

They are also in the process of adding more fuzzing targets covering the portions of the clients’ codebases that handle networking. A more in-depth look at how structural differential fuzzing operates, and a sneak peek at their exciting roadmap, can be gleaned by visiting their blog.

Even once the whole attack surface of the various endpoints has been exhaustively covered, the tool will still prove useful for detecting bugs introduced by subsequent protocol modifications and client updates.

Based on our interactions with the Sigma Prime folks and other client devs, we have gained a deep respect for their steadfast commitment to strictly adhering to security best practices, and for their resolve to continually assess, challenge and improve the security posture of the ecosystem as a whole.

The teams have some amazingly talented security leads working in an incredibly collaborative environment, and projects such as Beacon-Fuzz are a testament to this. They will have a lasting beneficial impact as they are refined and maintained well past Phase 0.

To conclude, we believe that Ethereum 2.0 has a great shot at shipping a highly resilient and hardened network with a best-in-class overall security posture, thanks to the strenuous efforts of multiple teams and the amount of expertise they bring to the table. Other projects in the space will be hard pressed to replicate these enviable achievements, the pervasive security culture, and the virtuous cycle brought to bear by Ethereum’s massive, energetic community and the grassroots movement involved in the fuzzing efforts. It’s a tall ask for any project, but especially difficult to mimic for the multitude of self-styled, VC-funded “Ethereum killers” that have been kept in carefully controlled and monitored incubators since their inception and lack a set of well-established, documented protocols and guidelines for responding to security SNAFUs. Ethereum has weathered countless attacks, and the expertise and insight gained by its battle-hardened client developers is invaluable here and can’t be overstated.

Does this mean bugs won’t happen come mainnet time, or that all of this is enough to ward off attackers and guarantee there will be no security incidents? Certainly not. But it highlights how uniquely positioned the ETH 2.0 project is to respond to such issues promptly and effectively if and when they arise. You’d be a fool to bet against ETH 2.0, and you won’t find us on the sell side of the order book or among the ranks of traders opening short positions anytime soon.

Halo out.


API3 – All Web 2.0 Onchain

Part 1 – The Technology

Preamble

What is a blockchain oracle?
What does it do?

Why is it so important?

An oracle is a method for bringing data from the world outside the blockchain onto it, from ‘meat-space’ as it’s been referred to. This commonly involves facilitating the interaction of Web 2.0 APIs with Smart Contracts to expand the capabilities and possibilities of blockchains.

This sector of the space has been developing for a few years now, and notable examples include Chainlink and Band Protocol, to name but a few.

In many ways, this mirrors an ancient expression of humanity’s search for meaning and purpose from beyond the physical realm.

Am I going off on a wild tangent here? Not really.

API3 seeks, like other projects, to bring data from one realm (the space outside a live blockchain) into another (the onchain realm); it seeks to act as a binding conduit for data between those realms, so that data from one can interact meaningfully with the other.

Allegorically speaking, this mirrors humanity’s constant effort throughout history to bridge an informational connection with divine, non-physical or metaphysical realms by means of revelations, prophecies or oracles. These satisfied the urge to derive meaning from another informational realm (the non-physical, non-corporeal), to predict the actions required to achieve a desired outcome, and to gain an advantage in pursuing one’s goals.

A prime example would be the Pythian Oracle at Delphi in Ancient Greece, consulted by the fleet that set out for Troy in Homer’s Iliad, by the Athenians and Spartans before they engaged the Persians, and by Alexander the Great.

We can be grateful that nowadays we have more precise means of bridging the gap between realms of information from which we can gain advantage than were available in those times.

Software is also easier to deal with than chicken entrails or the flights of birds were in classical Rome, for example.

Introduction

So then, what can we say of API3, the newest project to build in this infrastructure space, bringing non-blockchain data into the realm of onchain smart contracts?

The first thing would be to identify the unique elements that define this project: the provision of ‘first-party’ oracles (API3’s Airnode), a first in this space, and a uniquely configured governance structure combined with superbly aligned incentives.

Airnode

API3 seeks to bring data onchain via APIs to maximise its utility when interacting with smart contracts, but one crucial difference is that, unlike other projects in this space, the data providers themselves directly control their own feeds and handle data requests from those feeds, via API3’s ‘Airnode’ technology.

Text in bold italics and enclosed in quotes (throughout this article) is taken directly from the whitepaper:
https://raw.githubusercontent.com/api3dao/api3-whitepaper/master/api3-whitepaper.pdf

“First-party oracles are integral to the API3 solution. This means each API is served by an oracle that is operated by the entity that owns the API, rather than a third-party.”

In fact, this whitepaper is so well-written, it’s quite difficult to paraphrase and parse from, but I will try.

Having oracles operated by the API data-feed providers themselves is a first in this space, and it opens up all kinds of interesting possibilities that simply weren’t available before.

In light of this, I’m going to try to avoid direct comparisons with other projects because I don’t think they would be entirely fair to either those projects or to API3.

Security – Several aspects change when using first-party oracles: the API providers sign the data onchain using their own private keys, and the data is private by default; no middleman nodes are able to ‘see’ or parse the raw data itself. This obscures, and thus disincentivizes, its unauthorised resale, an existing potential problem with third-party oracle systems built on middleman nodes. Additionally, the third-party attack surface is removed entirely.
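To illustrate just the signing aspect, here is a minimal sketch of a provider authenticating its own response with its own key. We use Ed25519 via the `cryptography` package to keep it short (Ethereum itself uses secp256k1 ECDSA), and the payload is made up:

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The API provider holds its own private key; no middleman node ever sees it
# or the raw data being served.
provider_key = Ed25519PrivateKey.generate()
provider_pub = provider_key.public_key()

response = b'{"pair": "ETH/USD", "price": "401.23"}'  # illustrative payload
signature = provider_key.sign(response)

# The consumer verifies against the provider's known, publicly attributable
# key; any tampering raises InvalidSignature.
provider_pub.verify(signature, response)
print("response verifiably came from the provider itself")
```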

Redundancy – Since the data providers also control the oracles, responses come straight from the original, secure source of the data with no mediation, so less nodal redundancy is needed, which reduces costs, network latency and fees. An API provider would be more than sufficiently served by simply placing mirrors in each geographic area for verification. This redundancy can be created within the Airnode itself, and the frictionless nature of the interaction between the oracle feed and the smart contracts removes any need to provide a sandbox environment (set-and-forget).

Costs and Revenue – Fewer nodes mean lower operational costs, and less network latency means lower gas fees. The design of the Airnode removes a lot of the bottlenecking issues encountered with sequential threading of requests (see below – the multi-wallet aspect). Since no middlemen are involved, revenue is usage-based and accrues directly to the API data-feed provider; this is much fairer, since they are the ones providing both the managed service and the data itself.

The businesses that provide these data feeds on Web 2.0 are accustomed to a pricing/revenue model that is fixed, competitive, recurring and usage-based, which is extremely difficult to offer under the third-party oracle model due to the increased overheads of node-based middlemen and additional redundancy. Since costs are lower, more manageable and incurred by the requester, this model can now be applied to the API3 service ($0.10 per request, $100/month subscriptions, etc.), making it easier to integrate and more attractive with respect to existing business models. The synergy of requiring little, if any, retooling in terms of staff and resources with blockchain skills (set-and-forget) reinforces the potential for very high growth of the platform.

Transparency – Since the oracles are operated directly by the feed providers, their information is directly visible, something not possible with large numbers of redundant middleman nodes. Data providers’ identities are also visible and verifiable. There is no need for workarounds like off-chain signing, although it remains perfectly feasible with the Airnode tech; this reduces overheads and drives ecosystem growth.

Set-and-forget – This serverless oracle service is easy to configure with a little help, and new ones can be implemented daily. The plan to accelerate rollout, though, is to semi-automate and scale the onboarding of new oracle feeds by building a GUI toolset:

“Borrowing from the OpenAPI Specification format, Oracle Integration Specifications (OIS) define the operations of an API, the endpoints of an oracle, and how the two map to each other. An Airnode user will be able to serve an API over their oracle simply by providing its OIS to their node. Integrations made in this standardized format will be very easy to collect, version and distribute.”
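To get a feel for the idea, here is a simplified, purely hypothetical sketch of such a mapping; the field names below are invented for illustration and are not the real OIS schema:

```python
# Hypothetical sketch of an OIS-style mapping (invented field names).
ois_sketch = {
    "api": {
        "base_url": "https://api.example-provider.com",   # hypothetical API
        "operations": {
            "spot_price": {"path": "/v1/price", "method": "GET"},
        },
    },
    "oracle": {
        "endpoints": {
            "getPrice": {
                "maps_to_operation": "spot_price",         # API <-> oracle link
                "parameters": [
                    {"name": "pair", "maps_to_query_param": "symbol"},
                ],
            },
        },
    },
}
```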

Additional plans in this area include a ‘node dashboard’ for feed providers to monitor their revenue, nodes and so on, and a marketplace listing all available providers, what they provide, their fees and their endpoints.

This promises to help API3 scale the ecosystem rapidly, with a large and fast-growing base of first-party oracles from which to compose dAPIs.

Usage Model Patterns – These will encompass existing patterns as well as future ones that were previously difficult to offer via third-party architecture:

  • Request-Response.
    This is the existing model most commonly seen in the space: oracle feeds are provided, users request data from them, and they pay the requisite fees upon receiving the requested response.
  • Publish-Subscribe.
    This involves always-on, always-available feeds that are served to subscribers at a fixed cost and with guaranteed availability.
    DEXs are an excellent example of a market that would like to see this on offer. It is planned and very achievable: a customer could receive predefined callbacks based on detailed, granular preset parameters (e.g. $ETH at $400 on Kraken, or $BTC at $11,851 on Binance) which trigger predetermined liquidity events when the conditions are met, as in the sketch after this list.
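Here is a compact sketch contrasting the two patterns; the names and structure are illustrative, not API3’s actual interfaces:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subscription:
    pair: str
    threshold: float
    callback: Callable[[float], None]   # fires when the condition is met

subscriptions: list[Subscription] = []

def request_response(pair: str, feed: dict[str, float]) -> float:
    """Request-response: one query, one answer, a fee per call."""
    return feed[pair]

def publish(pair: str, price: float) -> None:
    """Publish-subscribe: an always-on feed pushes updates, and preset
    conditions trigger each subscriber's predetermined action."""
    for sub in subscriptions:
        if sub.pair == pair and price >= sub.threshold:
            sub.callback(price)

subscriptions.append(Subscription(
    "ETH/USD", 400.0, lambda p: print(f"ETH hit {p}: trigger liquidity event")))

publish("ETH/USD", 401.5)                                # callback fires
print(request_response("ETH/USD", {"ETH/USD": 401.5}))   # one-shot query
```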

The use cases for both patterns, and for future ones, are limited only by the imagination and creativity of the people who wish to use them, combined with the quality and nature of the data being provided.

The 800-Pound Gorilla in the Room: Sequential Threading

I’ve seen quite a few people, including very skilled and technical ones, who initially couldn’t work out how API3 could handle this problem. The issue is that requests funneled through a single account are handled sequentially and end up in queued threads, and this is one of the reasons why scaling is a serious problem on ETH 1.0.

In fact, if the problem had been tackled from the expected direction, decomposing and recomposing the issue, then this project could potentially have been crippled. But the approach used was truly a ‘Eureka!’ moment.

Burak Benligiray, API3’s CTO, outlines and teases it here:
https://medium.com/api3/the-gordian-knot-called-the-oracle-problem-e9731c55da13

The key to implementing first-party oracles, though, was, in my opinion, not engaging with the sequential threading issue in the first place.

Simply put, what API3 does is to process requests in parallel.

A multi-wallet approach is used: up to 2^256 wallets can be derived within a node, and each requester receives their own wallet to handle their request, so every member of this vast pool of wallets/request-handlers works in parallel.

The result?

NO sequential queuing, NO queue-based bottlenecks, and a superb workaround for scaling.

This HD (hierarchical deterministic) multi-wallet technology is most commonly used by exchanges to provide large numbers of wallets for their users; many of you have seen it in action when looking at your spot wallets on Binance, Coinbase or Kraken.
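Conceptually, the derivation looks something like the sketch below. This is a dependency-free stand-in using an HMAC rather than real BIP32 derivation, and every name in it is illustrative:

```python
import hashlib
import hmac

# One master secret, an effectively unlimited number of per-requester wallets:
# because each requester gets their own wallet, requests never queue behind a
# single account's pending transactions.
MASTER_SECRET = b"example master seed -- never hardcode a real one"

def derive_requester_key(requester_id: str) -> bytes:
    """Deterministically derive a unique wallet key for each requester."""
    return hmac.new(MASTER_SECRET, requester_id.encode(), hashlib.sha256).digest()

for requester in ("0xAlice...", "0xBob...", "0xCarol..."):
    key = derive_requester_key(requester)
    print(f"{requester} -> wallet key {key.hex()[:16]}...")
```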

It’s not new technology, but this manner of applying it is, in API3, unique. As Burak stated in his Medium piece, and I agree, it’s a great re-enactment of Alexander’s solution to the problem of the Gordian Knot.

Lateral, inspired, out-of-the-box thinking.

In the second, and final, part of this series we will deal with the things that complement and reinforce API3’s unique technology:

Part 2 – All things DAO…Governance and Incentives.

See you then, Halo out.


API3 and The Future of Oracles

The last few years in crypto certainly have been interesting, haven’t they? We’ve seen the rise and fall of the ICO bubble, years of bearish sentiment, the near-permanent hiatus of the very site you’re reading now, and, all the while, steady progress throughout. We’re not talking about the platitudes that CZ and company love to tout regarding BUIDL – yes, building has been happening, but on the surface level things would seem stagnant. After all, when prices are down over 90% across the board for months and then years, it really can seem like doomsday. In many cases it was: weak hands got washed out, lots of companies got sued, and many more went defunct. Yet despite all of this, innovation and development have truly continued on. And so it is that we come to API3 – what it’s building, why it’s doing it, and whether we really do have the next Chainlink on our hands.

The Oracle Problem

The infamous oracle problem has been one of the largest and most well-known issues facing smart contracts and blockchain development, and it’s been that way for years now. You have a smart contract on-chain with enforceable rules and functions, but they are really only useful with data that is available inside the Ethereum network itself.

You can’t make a contract on the price of gold if such an input has to come in from meatspace – and therein lies the oracle problem. Just how do you get this kind of data on-chain, and in a decentralized manner? Moreover, how do you ensure that this data is verifiably true, and how do you defend against an attack on such a data source? Certainly all of this increases the attack surface of a product dependent on a) the smart contract and b) the oracle provider itself.

Since the heady days of crypto, we’ve tried to resolve this oracle problem in several different ways, the most circuitous of which are prediction markets such as Augur or Gnosis. But the real money has always been in an oracle provider that can deliver this data anonymously, without third-party intervention, and in a cost-effective manner.

Enter Chainlink.

It would be poor form to discuss oracles and current solutions without mentioning Chainlink and all of the progress it has brought to the crypto ecosystem. In fact, this very site was a strong proponent of Chainlink back when it held its ICO in 2017. Of course, investing in an ICO is one thing – money was easy to make back then. What shows your strength as an investor more, however, is the ability to hold an investment through a bear market and reap the rewards once Chainlink was truly appreciated for what it was.

Chainlink is great. It’s one of the only true oracle projects that delivers on its promises, has a vibrant community of holders (known as the LINKmarines) and is poised to be one of the eminent blue-chip crypto tokens to hold in the future.

…So why am I writing this article?

Because Chainlink has problems. Problems that API3 solves.

The API Problem

So, we’ve discussed what the oracle problem is. In reality, it’s more a problem created by thinking too small about how we actually want smart contracts to function on Ethereum. The goal was never really to solve the decentralization of the nodes that deliver oracle data, or to overcomplicate things such that “anyone” can deliver this kind of data. Even talking about it here is a tad complex, is it not?

Actually, we have a much simpler problem. Really, what we want is the ability to hook onto off-chain data and use it in our contracts. Oracles, as far as blockchain middleware is concerned, have been compared to the APIs of the web in the sense that they deliver this data to the consumer. Rather than thinking of oracles as an abstraction over APIs, why don’t we just apply the design philosophy of an API itself to the blockchain?

Wouldn’t it be cool if instead of making an oracle call that costs you three dollars (quite expensive in the long run), you could make an API call that delivers the same data?

Wouldn’t it be cool to know who is actually delivering that data, rather than having to trust an anonymous node?

Wouldn’t it be cool to avoid all of the attack surface that multiple node providers enable and simply deliver that sort of data in a seamless integration?

Exit Chainlink. Enter API3.

The API Solution

So how does API3 work, and why are we so bullish on it? In short, it takes all of the value that Chainlink nodes are currently aggregating (you know, the ones that are only incentivized by that value) and delivers it to the providers of the data themselves. I mean this in a direct sense. You don’t need some intermediary set of Chainlink nodes to hook onto an API provider and transmit that data on-chain. You can just have the API provider themselves provide that data and reap the rewards. This solves several key problems that Chainlink will have to contend with over time, and it is why we think API3 is a very bullish product.

Firstly, you now have a reputation to uphold when you are directly providing API data to consumers. Report bad or wrong data on Chainlink? There’s a monetary repercussion, but that’s about it. No one really knows who controls the node that did the damage, due to its anonymous nature. As the API provider yourself, you have a direct investment in the veracity of your data, which removes a lot of the “oracle bribing” exposure you can get with Chainlink.

Bribing is a huge problem, and one Chainlink has solved, by the way – only they’ve made that solution prohibitively expensive. Chainlink accepts that oracles can be bribed, and part of its design safeguards against that by using multiple nodes to deliver the ground truth of whatever data you’re calling up. Multiple nodes cost money. Lots of money.

API3 has a neat solution they call Airnode, which is deployable on-chain and requires very little onboarding (which the team will help with themselves) on the part of the API provider. Once you set it up, you can forget about it. There’s your data, live on-chain, and anyone can make a call and request it. No middleman nodes required. No upkeep. No added attack surface.

It’s elegant. Extremely elegant.

The Money

That’s what it’s all ultimately about, right? In the end, we need to ask ourselves what the actual advantages are for the data provider here, as well as for the consumer. Aside from the aforementioned ease of onboarding (try getting any legacy company to set up a blockchain node), API3 is just… cheaper. It’s cheaper to set up, cheaper to manage, and cheaper for making oracle calls on-chain. Nearly every aspect of API3 has been built with the data consumer in mind, and we think this is one area where API3 simply beats out the competition.

API3 is mostly focused on creating decentralized data feeds, akin to feeds.chain.link, that are composed of multiple Airnodes and governed decentrally and transparently. While you can call an Airnode directly, and doing so is as robust as calling the underlying API directly, most of DeFi runs on decentralized feeds, and the ethos of the space is such that single-source oracles are seen as suboptimal (though these will be useful for things like prediction markets).
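As a rough illustration of composing such a feed (illustrative provider names and values, with a simple median standing in for whatever aggregation a production dAPI would actually use):

```python
import statistics

# Answers from several first-party Airnodes for the same query; one is bad.
airnode_answers = {
    "provider_a": 401.10,   # hypothetical first-party providers
    "provider_b": 401.25,
    "provider_c": 399.90,
    "provider_d": 10.00,    # faulty or malicious response
}

# A median tolerates a minority of bad or compromised sources.
dapi_value = statistics.median(airnode_answers.values())
print(f"aggregated dAPI value: {dapi_value}")   # 400.5 -- outlier ignored
```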

Of course, we try to be unbiased here – in our personal opinion, Chainlink will still provide a solid level of security with its node architecture. However, with the inclusion of the reputational element that each API provider will now carry, we think API3 offers an alternative that can plug into existing systems and reduce gas costs, to the tune of 50% or more per call, for very little downside.

Solid savings, solid returns.

Governance Hype

Every project these days has a governance mechanic built into its token, and API3 is no different. There is value to be acquired here, in dollars and cents, when it comes to owning API3 supply. You get to vote on governance changes and fee structure updates, and to channel many of those fees to you, the holder. For a burgeoning data marketplace we think this is extremely bullish, and once again we are willing to make the play that API3 is less a bet on this specific project succeeding (it will) and more a bet on blockchain itself becoming even larger and more mainstream.

There’s also a good staking mechanic involved with the token, which provides rewards to those willing to put their tokens up as insurance against malfunctions and errors. We expect these to happen, but to be few and far between. They happen on other existing systems too, and we are glad that API3 is taking the initiative here and being realistic rather than pretending these problems don’t exist.

Plus, staked tokens mean reduced circulating supply and fewer sellers; you know the deal.

There’s lots more to API3 that you can check out in their whitepaper and on their website, so we urge any intelligent investor to go ahead and do so. Make your own decision on whether you see any value here (we do), and on whether you’re interested in their upcoming sale in October.

Token Metrics? They exist

API3 has some really great things going for it. They have a lot of pre-existing customers who are looking to use API3 right out of the gate. The team has worked very closely with Chainlink themselves for some time now, so they know exactly where the pain points are.

There’s no reason for us to include things like token metrics, sale numbers, and team sections, as you are free to do your own research on that kind of thing. Nor do we really exist in that kind of market anymore – attractive terms and numbers are never enough these days to achieve real appreciation. This time around you need an actual value proposition and a real long-term view to make a splash in this market.

Is API3 the end of Chainlink? Are we bearish on Chainlink? Is this a hit piece on Chainlink? Certainly not. If you think so, remember that you’re accusing individuals who invested in Chainlink big time and have held it for years of…being bearish on Chainlink.

No, we think these two solutions are great for different things – it will ultimately be up to the data consumer whether they want a cheaper, hands-off solution or Chainlink’s more expensive but more robust offering. There’s definitely room in the data market for more oracle options, and existing services can only be improved by quality projects in this sector.

We’re convinced API3 is one of them. And we’re investing. Heavily.

Watch this space.

Halo out.