The Protocol Engineering Team recently presented a bird’s eye view of the Layer 2 Ecosystem in a Multichain Strategy and Roadmap for Maker, which led to ongoing discussion and community comments for a risk framework to assess these new ecosystems. This follow-on forum post can be broken down into three parts:
- Part 1: Scaling the Maker Protocol. Reiterates the three scenarios, which can be thought of as sequential steps for scaling Maker across an evolving Layer 2 ecosystem.
- Part 2: L2 Risk Assessment Framework. Introduces a risk assessment framework that covers architectural and security considerations, as well as other non-technical implementation details that could potentially introduce risk to the protocol.
- Part 3: The Process and Actionable Steps. Walks the reader through the plan for applying the Risk Assessment Framework, including how we can adhere to the MIP structure, polls, and executive votes to onboard L2 platforms.
The purpose of this document is to set the scene for a framework that can help guide the community towards assessing different Layer 2 scaling solutions. Note: in this document we often refer to “L2”; the reader is advised to consider this a reference to all potential scalability solutions, including Rollups, Sidechains/Commit Chains, and other stand-alone L1s.
As previously explained, there are three primary scenarios that can be considered as an approach for scaling the Maker Protocol:
- Simple bridging - in this scenario, DAI is bridged to other chains where it can be used in the DeFi ecosystems native to that chain. Each bridge will create its own version of DAI, similar to different BTC bridges to Ethereum. These different versions of DAI are not fungible with each other and have to be swapped - typically using AMM swap markets such as Curve. Such a bridge is not controlled nor deployed by MakerDAO. An example of such a bridge is e.g. the Polygon bridge.
- Advanced bridging - in this scenario MakerDAO controls and deploys the bridge which allows for deploying a specific version of the DAI token controlled by MakerDAO governance. Such a version of DAI is potentially more attractive to end users, because it can support much faster withdrawal times due to immediate L2 transaction finality. This can be a huge improvement for some contracts, e.g. Optimistic Rollups that suffer from a long withdrawal time, typically 7 days. More importantly, withdrawing funds through the bridge should be guaranteed to succeed for end users by Maker holders. Such a bridge is currently being developed by the PE Core Unit for Optimism.
- Native DAI issuance - in this scenario DAI is generated directly on a different chain using assets that are available on that chain as collateral. The ultimate goal is to make DAI generated on L2 fungible with DAI bridged from L1. This scenario is only possible if MakerDAO’s controlled “advanced bridge” is first deployed because minted DAI on L2 should be withdrawable to L1 even though there was no previous DAI deposited to the bridge.
Each of these scenarios introduces its own set of risks to MKR holders. Understanding these risks allows us to create a proper evaluation framework for different L2 and scalability solutions.
For simple bridging, DAI has to be generated first on Ethereum L1 before it can be used on L2. All risks associated with an L2 failure (including its total collapse) will directly affect only DAI users that move their DAI to L2s and/or users that purchase DAI directly on these L2s. The worst case scenario is a potential DAI liquidity crunch: if the particular L2 collapses, DAI that is locked in the bridge may never be retrieved. This scenario would be equivalent to a large amount of DAI being lost, resulting in a potential depeg due to limited DAI supply.
Another important consideration is that if the Simple Bridge is hacked, users holding L2 “DAI” will not be able to withdraw their L2 tokens to L1. Such an incident would likely depeg the L2 token because after such a hack, it would only be partially collateralized by the DAI locked in the L1 bridge. Again, due to the non-fungible quality of L2 DAI in this scenario, the risk is limited to the L2 chain and would not have flow-on effects to L1 DAI.
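The lock-and-mint accounting behind a simple bridge, and its insolvency failure mode, can be sketched in a few lines of Python. This is a minimal illustration only; all class, method, and variable names here are hypothetical and not taken from any actual bridge implementation:

```python
class SimpleBridge:
    """Minimal lock-and-mint bridge sketch (hypothetical, for illustration).

    Each bridge instance issues its own L2 token, so DAI bridged through
    two different bridges is not fungible and must be swapped on an AMM.
    """

    def __init__(self, name):
        self.name = name
        self.l1_locked = 0   # DAI held in this bridge's L1 escrow
        self.l2_supply = {}  # L2 balances of this bridge's DAI version

    def deposit(self, user, amount):
        # L1 DAI is locked; an equal amount of this bridge's
        # L2 token is minted for the depositor.
        self.l1_locked += amount
        self.l2_supply[user] = self.l2_supply.get(user, 0) + amount

    def withdraw(self, user, amount):
        # Withdrawal only succeeds while the L1 escrow is solvent;
        # if the escrow is drained, remaining L2 tokens are stranded.
        if amount > self.l2_supply.get(user, 0):
            raise ValueError("insufficient L2 balance")
        if amount > self.l1_locked:
            raise RuntimeError(f"{self.name} bridge escrow is insolvent")
        self.l2_supply[user] -= amount
        self.l1_locked -= amount


polygon_dai = SimpleBridge("Polygon")
other_dai = SimpleBridge("OtherChain")
polygon_dai.deposit("alice", 100)
# polygon_dai's token and other_dai's token are distinct assets:
# moving between them requires a swap, not a transfer.
```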
The Advanced Bridge (aka “Maker” bridge) guarantees that tokens moved to an L2 through this bridge will always be “withdrawable” to L1. If the bridge is hacked or becomes insolvent, L2 DAI will still be “withdrawable”, however, in this case, DAI will have to be minted on L1 to cover any loss. In this scenario, MakerDAO can be thought of as an insurer or underwriter guaranteeing L2 DAI as always redeemable for L1 DAI. This functionality is important for Native DAI Issuance on L2 because we want it to be fully fungible with DAI moved through the Maker Bridge and therefore it should always be withdrawable to L1.
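The underwriting behaviour described above can be contrasted with the simple bridge in a short sketch. Again, the names are hypothetical and this is a simplification, not the actual bridge design:

```python
class AdvancedBridge:
    """Sketch of the Advanced Bridge withdrawal guarantee (hypothetical).

    Withdrawals are always honoured: if the L1 escrow cannot cover a
    withdrawal (e.g. after a hack), the shortfall is minted on L1, with
    MakerDAO effectively acting as insurer of L2 DAI.
    """

    def __init__(self):
        self.l1_escrow = 0        # DAI deposited through the bridge
        self.l1_minted_cover = 0  # DAI minted by governance to cover losses

    def deposit(self, amount):
        self.l1_escrow += amount

    def withdraw(self, amount):
        if amount <= self.l1_escrow:
            self.l1_escrow -= amount
        else:
            # Escrow is insolvent: the withdrawal still succeeds, but the
            # difference must be minted on L1 and becomes a cost to MKR holders.
            shortfall = amount - self.l1_escrow
            self.l1_escrow = 0
            self.l1_minted_cover += shortfall
        return amount  # withdrawal is always honoured
```

This is also why native L2 issuance depends on the Advanced Bridge: DAI minted on L2 can be withdrawn to L1 even though no corresponding DAI was previously deposited.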
Issuing DAI natively on L2 introduces, in addition to the risks related to the L2 infrastructure and bridging, risks similar to those of minting DAI on L1. For example, we must protect against DAI becoming insolvent, unbacked, or losing its peg to the USD. This means that governance parameters such as stability fees, debt ceilings, and collateralization ratios all need to be managed through L1 governance and passed through to each respective L2 ecosystem.
When we consider bridging and minting DAI to other chains, we should consider the following main attack vectors:
- L1 DAI can be stolen from the bridge
- L1 DAI can be stuck in the bridge
- L2 DAI can be minted “out of thin air” on L2
- L2 DAI can become undercollateralized
It is worth noting that the current Proof-of-Work Consensus algorithm on base layer Ethereum guarantees that even though miners controlling 51% of the hashpower can censor individual transactions and perform a potential re-org of the chain, they cannot mine blocks containing invalid transactions (e.g. stealing DAI from users’ accounts or minting DAI out of thin air) as these will be rejected by honest minority miners and users running full nodes. Hence we consider minting L1 DAI “out of thin air” to be a risk that all DAI users and MKR holders accept with MakerDAO being deployed on Ethereum.
With that in mind we can broadly divide Ethereum scaling solutions into two categories:
- Solutions that will use L1 smart contracts to verify the validity of L2 transactions (through either Zero-Knowledge Validity Proofs or Fraud Proofs)
- Solutions that are independent of Ethereum’s consensus model (i.e. sidechains)
For scaling solutions in the second category, the security of assets is entirely dependent on the security of the L2. It is possible, for example, that a majority of L2 validators will mine invalid blocks in which they mint L2 DAI “out of thin air” and then try to withdraw it to L1. This has potentially serious consequences for the Advanced Bridge (and, consequently, for minting DAI on L2) that we need to be aware of. In the case of crypto-economic attacks on L2, validators can potentially mint unlimited amounts of DAI and attempt to move it through the bridge to L1. This could essentially break the entire Maker Protocol, reducing its security to that of the L2 and causing irreparable damage. The need to assess L2 risks is therefore critical to overall protocol health.
| | DAI deposited in L1 bridge is stolen | Unbacked L2 DAI is minted | L2 DAI becomes undercollateralized |
|---|---|---|---|
| Simple Bridge | Bridge becomes insolvent; L2 “DAI” cannot be moved to L1 and may depeg | L1 DAI bridge can be drained; the bridge becomes insolvent; remaining L2 “DAI” cannot be moved to L1 and may depeg | n/a |
| Advanced Bridge | MKR holders will need to mint DAI so that L2 DAI can be moved to L1 | L1 DAI bridge can be drained; the bridge becomes insolvent; MKR holders will need to mint DAI so that L2 DAI can be moved to L1 | n/a |
| Advanced Bridge + Native DAI Issuance | MKR holders will need to mint DAI so that L2 DAI can be moved to L1 | L1 DAI bridge can be drained; the bridge becomes insolvent; MKR holders will need to mint DAI so that L2 DAI can be moved to L1 | MKR will need to be minted to remove the bad debt |
We propose a simple risk assessment framework for each potential solution that would allow Maker governance to decide how and if MakerDAO should consider embracing it. The goal is to decide:
- Which architectural pattern should be considered? (None / Simple Bridge only / Advanced Maker Bridge / Advanced Bridge followed by Native DAI Issuance)
- If Advanced Bridge and Native DAI Issuance is considered, what are the technical risks and how should Maker governance set the risk parameters?
To properly address the above questions, each scaling solution should firstly be considered based upon its technical architecture and then upon broader implementation considerations. These are explored below.
Different L2 scaling solutions have different security considerations. Below we present a classification adapted from a comparison framework published by Alex Gluchowski.
The first step in an L2 solution evaluation is to classify it into the appropriate category, considering two major aspects:
- Is transaction data kept on-chain or off-chain, or do users need to rely on additional trust assumptions regarding data availability?
- Are state commitments published on L1, and if so, is there a mechanism to ensure they are valid (either a Validity Proof or a Fraud Proof)?
Based on the above criteria we can classify L2 solutions into the six categories listed in the above table.
It is important to understand the specific trade-offs of each category when considering scaling MakerDAO to any of the chosen L2 and the trust assumptions associated with each one of them. Broadly speaking, we can look at each category as:
- Solutions inheriting L1 security assumptions (Optimistic and ZKRollups)
- Solutions requiring additional trust assumptions related to data availability (Plasma, Validium)
- Solutions requiring additional trust assumptions for L2 validators (Sidechains, CommitChains)
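As an illustrative simplification of the two-axis classification above, the mapping could be sketched as follows. The category boundaries here are approximate assumptions for illustration, not an authoritative taxonomy:

```python
def classify_l2(data_on_l1: bool, commitment: str) -> str:
    """Roughly classify an L2 by data availability and state-commitment
    verification (illustrative simplification, hypothetical helper).

    commitment: how L2 state commitments are handled on L1 --
    'validity' or 'fraud' proofs, 'unverified', or 'none'.
    """
    if commitment == "validity":
        return "ZK-Rollup" if data_on_l1 else "Validium"
    if commitment == "fraud":
        return "Optimistic Rollup" if data_on_l1 else "Plasma"
    if commitment == "unverified":
        return "Commit Chain"  # commitments posted, but not proven valid
    return "Sidechain"         # fully independent consensus
```

Under this sketch, only the first two branches inherit L1 security assumptions; every other branch adds trust assumptions that MKR holders would need to evaluate.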
If any solution requires additional trust assumptions beyond L1, these need to be carefully examined and understood by MKR holders before they decide to build an Advanced Bridge or consider minting DAI on these L2s. Specifically, the following questions need to be asked:
- How easy is it to execute a data withholding attack and what are the potential consequences (for all systems except Rollups)?
- For Sidechains and Commit Chains:
- What is the transaction finality, i.e. what is the depth of possible reorgs?
- Can users’ transactions be censored?
- How many validators can fork the chain with new consensus rules?
- What are the chain liveness assumptions?
When considering a particular solution, one needs to evaluate, apart from its architecture, the actual implementation, which may introduce additional risks. We propose that for each L2 system the risk profile be augmented based on the following criteria:
Source Code and Team
- Is the source code for the whole L2 fully available?
- Are all contracts deployed on L1 verified?
- How big is the development team?
- What are the credentials of individual team members, background and history of the project?
Community and Usage
- Is there a growing DeFi community with a lot of TVL, native tokens and other DeFi projects?
- Is there a vibrant and responsive community of L2 users?
- Is the L2 actively used?
- Are the smart contracts on L1 upgradable?
- If yes, is the owner allowed to do the upgrade via an EOA or MultiSig?
- If MultiSig, are the identities of the signers known?
- Is there a timelock allowing users an exit period before an upcoming upgrade?
- Details of Fraud Proofs (for Plasma and Optimistic Rollups).
- Details of Validity Proofs (for Validium and ZK-Rollups).
- Has a Fraud Proof ever been tested on mainnet?
- Details of forced exits - are these implemented? Have they ever been tried on mainnet?
- Details of transaction encryption/compression/encoding (for Rollups).
Architecture and Tooling
- Are Block Explorers available?
- Can users easily run independent Full Nodes?
- Are there public nodes available for clients (Infura, Alchemy or similar)?
- Is there support for The Graph, Google BigQuery, etc. available?
- Is it possible to run a full archive node with tracing?
- Is there a public L2 API available?
- Are there libraries allowing for quick querying of L2 state?
- Is L2 EVM compatible?
- If yes, can the standard EVM tooling be used to develop L2 smart contracts?
- If no, what tooling is available? Is it possible to transpile Solidity code into the L2 native language?
Security & Audits
- Has the L2 been audited? If yes, what was the scope of the audit?
- Can L2 contracts be formally verified? If yes, what tools can be used to do it?
- How long has the L2 been running? Have there been any security incidents?
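One lightweight way to capture answers to these questions in a reviewable form is sketched below. The `L2Assessment` container and its field names are hypothetical, shown only to illustrate how the checklist could be recorded for governance discussion:

```python
from dataclasses import dataclass, field


@dataclass
class L2Assessment:
    """Hypothetical container for an implementation review.

    Each answer is recorded together with a free-form note so that
    the rationale behind the assessment is preserved.
    """
    name: str
    answers: dict = field(default_factory=dict)

    def record(self, question: str, ok: bool, note: str = ""):
        self.answers[question] = (ok, note)

    def open_concerns(self):
        # Questions answered unfavourably, flagged for discussion.
        return [q for q, (ok, _) in self.answers.items() if not ok]


review = L2Assessment("ExampleRollup")
review.record("Are all contracts deployed on L1 verified?", True)
review.record("Has a Fraud Proof ever been tested on mainnet?", False,
              "not yet exercised")
print(review.open_concerns())  # lists the fraud-proof question
```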
Below is a teaser of how the evaluation could look:
The following steps outline the proposed process for onboarding an L2.
We propose that the Risk Framework (MIP) be submitted upfront, essentially mirroring the way we onboard collateral. This should adhere to the above considerations and questions. Following submission and community discussion, a poll will be held; if favourable, this will lead to the Protocol Engineering, Risk, and Oracle teams completing Technical Evaluations, with an executive vote to follow. This simple process is illustrated below:
This process introduces important upfront questions to ensure that our resources are correctly allocated. Each step is expanded upon in further detail here:
- Risk Framework submission (MIP) - from L2 team/core unit or interested stakeholders, to include answers to the list of proposed questions.
- Community Discussion and presentation - it is important for community engagement as well as existing Core Units to help give feedback to those compiling the MIP submission to ensure that it is suitably complete, allowing for accurate community assessment.
- Community Poll - if successful, will give the core units the greenlight to proceed with further analysis.
- PE/Risk/Oracle Technical Evaluations - deeper dive analysis to complement the initial MIP.
- PE/Risk/Oracle Team Present & Discuss findings - to ensure a well-rounded evaluation of the solution being presented before the community votes in the executive.
- Executive Vote - assuming all Technical Evaluations are satisfied, the executive vote will request community approval to pursue the Layer 2 protocol in question.
- Development begins - this may take on various forms, including the core unit itself working on the proposal, or parallel development teams depending on the resources available or the prioritization of the initiative.
The above process, although fairly straightforward, introduces one meaningful variation in step 1 - it is possible that a dedicated development team may wish to onboard itself as a core unit to do targeted L2 development. To give a real example - leading up to and during EthCC, the Protocol Engineering Core Unit met with StarkWare to discuss creating an independent L2 Core Unit to focus on StarkNet (using ZK-Rollups) deployment.
Such a proposal would proceed through the existing MIP CU onboarding framework, which if successful, would then involve ongoing engagement with existing Core Units to complete the risk framework. This work would be fully independent and decentralized to the new Core Unit. The high level process is loosely illustrated below (note there will be community polls as per usual process, omitted here for simplicity):
As the community is aware, Optimism and Arbitrum development is well underway. For completeness, Protocol Engineering will complete the Risk Framework for both of these as part of ongoing work.
1. Complete Risk Framework Questions: Protocol Engineering will work with the community to finalise the scope of the risk framework, making sure nothing has been omitted (Proposed Risk Framework).
2. Finalise Risk Framework Process: Protocol Engineering and GovAlpha to determine whether the Risk Framework needs to be formalised into the MIP framework or not.
3. Continue with Optimism and Arbitrum: Including development and completion of the Risk Framework.
4. Liaise with the community and other L2 providers: Offer guidance to help them complete Risk Framework assessments.
Shoutout to @bartek for the many discussions leading to such a comprehensive framework. We hope this framework creates a formal and transparent mechanism for onboarding more L2 protocols, and we look forward to talking to the community about the scope of the framework and the structure that we have proposed.