Introduction
🚧 Valence Protocol architecture and developer documentation is still evolving rapidly. Portions of the toolchain have stabilized for building cross-chain vaults and for extending vaults with multi-party agreements. Send us a message on X if you'd like to get started!
Valence is a unified development environment that enables building trust-minimized cross-chain DeFi applications, called Valence Programs.
Valence Programs are:
- Easy to understand and quick to deploy: a program can be set up with a configuration file and no code.
- Extensible: if we don't yet support a DeFi integration out of the box, new integrations can be written in a matter of hours!
Example Use Case:
A DeFi protocol wants to bridge tokens to another chain and deposit them into a vault. After a certain date, it wants to unwind the position. While the position is active, it may also want to delegate the right to change vault parameters to a designated committee so long as the parameters are within a certain range. Without Valence Programs, the protocol would have two choices:
- Give the tokens to a multisig to execute actions on the protocol's behalf
- Write custom smart contracts and deploy them across multiple chains to handle the cross-chain token operations.
Valence Programs offer the DeFi protocol a third choice: rapidly configure and deploy a secure solution that meets its needs without trusting a multisig or writing complex smart contracts.
Valence Programs
There are two ways to execute Valence Programs.
- On-chain Execution: Valence currently supports CosmWasm and EVM, with SVM support coming soon. The rest of this section provides a high-level breakdown of the components that comprise a Valence Program using on-chain execution.
- Off-chain Execution via ZK Coprocessor: Early specifications exist for the Valence ZK coprocessor. We aim to move as much computation off-chain as possible, since off-chain computation is a more scalable approach to building a cross-chain execution environment.
Unless explicitly mentioned, you may assume that documentation and examples in the remaining sections are written with on-chain execution in mind.
Domains
A domain is an environment in which the components that form a program (more on these later) can be instantiated (deployed).
Domains are defined by three properties:
- The chain: the blockchain's name e.g. Neutron, Osmosis, Ethereum mainnet.
- The execution environment: the environment under which programs (typically smart contracts) can be executed on that particular chain e.g. CosmWasm, EVM, SVM.
- The type of bridge used from the main domain to other domains e.g. Polytone over IBC, Hyperlane.
Within a particular ecosystem of blockchains (e.g. Cosmos), the Valence Protocol usually defines one specific domain as the main domain, on which some supporting infrastructure components are deployed. Think of it as the home base supporting the execution and operations of Valence Programs. This will be further clarified in the Authorizations & Processors section.
Below is a simplified representation of a program transferring tokens from a given input account on the Neutron domain, a CosmWasm-enabled smart contract platform secured by the Cosmos Hub, to a specified output account on the Osmosis domain, a well-known DeFi platform in the Cosmos ecosystem.
```mermaid
---
title: Valence Cross-Domain Program
---
graph LR
  IA((Input Account))
  OA((Output Account))
  subgraph Neutron
    IA
  end
  subgraph Osmosis
    OA
  end
  IA -- Transfer tokens --> OA
```
Valence Accounts
Valence Programs usually perform operations on tokens across multiple domains. To ensure that the funds remain safe throughout a program's execution, Valence Programs rely on a primitive called Valence Accounts. Valence Accounts can also be used to store data that is not directly related to tokens.
In this section we will introduce all the different types of Valence Accounts and explain their purpose.
Base Accounts
A Valence Base Account is an escrow contract that can hold balances for various supported token types (e.g., in Cosmos, ics-20 or cw-20) and ensure that only a restricted set of operations can be performed on the held tokens.
Valence Base Accounts are created (i.e., instantiated) on a specific domain and bound to a specific Valence Program. Valence Programs will typically use multiple accounts during the program's lifecycle for different purposes. Valence Base Accounts are generic by nature; their use in forming a program is entirely up to the program's creator.
Using a simple token swap program as an example: the program receives an amount of Token A in an input account and will swap these Token A for Token B using a DEX on the same domain (e.g., Neutron). After the swap operation, the received amount of Token B will be temporarily held in a transfer account before being transferred to a final output account on another domain (e.g., Osmosis).
For this, the program will create the following accounts:
- A Valence Base Account is created on the Neutron domain to act as the Input account.
- A Valence Base Account is created on the Neutron domain to act as the Transfer account.
- A Valence Base Account is created on the Osmosis domain to act as the Output account.
```mermaid
---
title: Valence Token Swap Program
---
graph LR
  IA((Input Account))
  TA((Transfer Account))
  OA((Output Account))
  DEX
  subgraph Neutron
    IA -- Swap Token A --> DEX
    DEX -- Token B --> TA
  end
  subgraph Osmosis
    TA -- Transfer token B --> OA
  end
```
Note: this is a simplified representation.
Valence Base Accounts do not perform any operation by themselves on the held funds; the operations are performed by Valence Libraries.
Valence Storage Account
The Valence Storage Account is a type of Valence account that can store Valence Type data objects.
Like all other accounts, Storage Accounts follow the same pattern of approving and revoking libraries that are authorized to post Valence Types into a given account.
While regular Valence (Base) accounts are meant for storage of fungible tokens, Valence Storage accounts are meant for storage of non-fungible objects.
API
Execute Methods
The Storage Account is a simple contract that exposes the following execute methods:
```rust
pub enum ExecuteMsg {
    // Add library to approved list (only admin)
    ApproveLibrary { library: String },
    // Remove library from approved list (only admin)
    RemoveLibrary { library: String },
    // Store the given `ValenceType` variant under storage key `key`
    StoreValenceType { key: String, variant: ValenceType },
}
```
Library approval and removal follow the same implementation as that of the fund accounts.
`StoreValenceType` is the key method of this contract. It takes a key of type `String` and its associated value of type `ValenceType`. If `StoreValenceType` is called by the owner or an approved library, it will persist the key-value mapping in its state. Storage works in an overriding manner, meaning that posting data for a key that already exists will override its previous value and act as an update method.
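As an illustration, an approved library might post a value like this. This is a minimal sketch: the storage account address, the key name, and the surrounding CosmWasm plumbing are illustrative rather than the exact protocol code.

```rust
use cosmwasm_std::{to_json_binary, CosmosMsg, StdResult, WasmMsg};

// Hypothetical helper inside an approved library: persist a Valence type
// into the storage account under a chosen key.
fn store_pool_snapshot(
    storage_account: String,
    pool: ValenceType, // some registered Valence type variant
) -> StdResult<CosmosMsg> {
    let msg = ExecuteMsg::StoreValenceType {
        key: "pool_snapshot".to_string(),
        variant: pool,
    };
    Ok(CosmosMsg::Wasm(WasmMsg::Execute {
        contract_addr: storage_account,
        msg: to_json_binary(&msg)?,
        funds: vec![],
    }))
}
```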
Query Methods
Once a given type has been posted into the storage account with a `StoreValenceType` call, it becomes available for querying. The Storage Account exposes the following `QueryMsg`:
```rust
pub enum QueryMsg {
    // Get list of approved libraries
    #[returns(Vec<String>)]
    ListApprovedLibraries {},
    // Get Valence type variant from storage
    #[returns(ValenceType)]
    QueryValenceType { key: String },
}
```
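Once stored, another contract can read the value back with a smart query. Again a sketch, with the key name purely illustrative:

```rust
use cosmwasm_std::{Deps, StdResult};

// Read a previously stored Valence type back from the storage account.
fn load_pool_snapshot(deps: Deps, storage_account: String) -> StdResult<ValenceType> {
    deps.querier.query_wasm_smart(
        storage_account,
        &QueryMsg::QueryValenceType {
            key: "pool_snapshot".to_string(),
        },
    )
}
```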
Interchain Accounts
A Valence Interchain Account is a contract that creates an ICS-27 Interchain Account (ICA) over IBC on a different domain and then sends protobuf messages over IBC to the ICA for remote execution. It is specifically designed to interact with other chains in the Cosmos ecosystem, in particular with chains that don't support smart contracts. To use this account contract, the remote chain must have ICA host functionality enabled and should have an allowlist that includes the messages being executed.
Valence Interchain Accounts are created (i.e., instantiated) on Neutron and bound to a specific Valence Program. Valence Programs will typically use these accounts to trigger remote execution of messages on other domains.
As a simple example, consider a Valence Program that needs to bridge USDC from Cosmos to Ethereum via the Noble chain. Noble doesn't support CosmWasm or any other execution environment, so the Valence Program will use a Valence Interchain Account to first create an ICA on Noble and then send a message to the ICA to interact with the corresponding native module on Noble, using the funds previously sent to the ICA.
For this, the program will create a Valence Interchain Account on the Neutron domain to create an ICA on the Noble domain:
```mermaid
---
title: Valence Interchain Account
---
graph LR
  subgraph Neutron
    IA[Interchain Account]
  end
  subgraph Noble
    OA[Cosmos ICA]
  end
  IA -- "MsgDepositForBurn protobuf" --> OA
```
Valence Interchain Accounts do not perform any operation by themselves; the operations are performed by Valence Libraries.
API
Instantiation
Valence Interchain Accounts are instantiated with the following message:
```rust
pub struct InstantiateMsg {
    // Initial owner of the contract
    pub admin: String,
    pub approved_libraries: Vec<String>,
    // Remote domain information required to register the ICA
    // and send messages to it
    pub remote_domain_information: RemoteDomainInfo,
}

pub struct RemoteDomainInfo {
    pub connection_id: String,
    // Relative timeout in seconds after which the packet times out
    pub ica_timeout_seconds: Uint64,
}
```
In this message, the `connection_id` of the remote domain and the timeout for the ICA messages are specified.
Execute Methods
```rust
pub enum ExecuteMsg {
    // Add library to approved list (only admin)
    ApproveLibrary { library: String },
    // Remove library from approved list (only admin)
    RemoveLibrary { library: String },
    // Execute a list of Cosmos messages, useful e.g. to retrieve
    // funds that were sent here by the owner
    ExecuteMsg { msgs: Vec<CosmosMsg> },
    // Execute a protobuf message on the ICA
    ExecuteIcaMsg { msgs: Vec<ProtobufAny> },
    // Register the ICA on the remote chain
    RegisterIca {},
}
```
Library approval and removal follow the same implementation as that of the fund accounts.
`ExecuteMsg` works in the same way as for the base account. `ExecuteIcaMsg` is a list of protobuf messages that will be sent to the ICA on the remote chain; each message contains the `type_url` and the protobuf-encoded bytes to be delivered. `RegisterIca` is a permissionless call that registers the ICA on the remote chain. This call requires that the Valence Interchain Account does not already have another ICA created and open on the remote chain.
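Tying this back to the Noble example above, a caller could wrap a protobuf-encoded message for remote execution roughly like this. A sketch only: the type URL and encoding are illustrative of CCTP's `MsgDepositForBurn`, and `ProtobufAny` is assumed to carry a type URL plus raw bytes.

```rust
use cosmwasm_std::Binary;

// Build an ExecuteIcaMsg carrying one protobuf message for the ICA.
// The bytes are assumed to be a protobuf-encoded MsgDepositForBurn body.
fn deposit_for_burn_ica_msg(encoded: Vec<u8>) -> ExecuteMsg {
    let any = ProtobufAny {
        type_url: "/circle.cctp.v1.MsgDepositForBurn".to_string(),
        value: Binary::from(encoded),
    };
    ExecuteMsg::ExecuteIcaMsg { msgs: vec![any] }
}
```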
Query Methods
The Interchain Account exposes the following `QueryMsg`:
```rust
pub enum QueryMsg {
    // Get list of approved libraries
    #[returns(Vec<String>)]
    ListApprovedLibraries {},
    // Get the state of the ICA
    #[returns(IcaState)]
    IcaState {},
    // Get the remote domain information
    #[returns(RemoteDomainInfo)]
    RemoteDomainInfo {},
}

pub enum IcaState {
    // Not created yet
    NotCreated,
    // Was created but closed, so creation should be retriggered
    Closed,
    // Creation is in progress, waiting for confirmation
    InProgress,
    Created(IcaInformation),
}

pub struct IcaInformation {
    pub address: String,
    pub port_id: String,
    pub controller_connection_id: String,
}
```
There are two queries specific to the Valence Interchain Account: `IcaState`, which returns the state of the ICA, and `RemoteDomainInfo`, which returns the remote domain information provided during instantiation. ICAs can only be registered if the `IcaState` is `NotCreated` or `Closed`.
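Putting the two together, a caller could check the ICA state before attempting a registration. A minimal sketch, with the surrounding plumbing illustrative:

```rust
use cosmwasm_std::{to_json_binary, CosmosMsg, Deps, StdResult, WasmMsg};

// Query the ICA state and, if registration is valid, build a RegisterIca call.
fn maybe_register_ica(deps: Deps, account: String) -> StdResult<Option<CosmosMsg>> {
    let state: IcaState = deps
        .querier
        .query_wasm_smart(account.clone(), &QueryMsg::IcaState {})?;

    // Registration is only valid from the NotCreated or Closed states.
    match state {
        IcaState::NotCreated | IcaState::Closed => {
            Ok(Some(CosmosMsg::Wasm(WasmMsg::Execute {
                contract_addr: account,
                msg: to_json_binary(&ExecuteMsg::RegisterIca {})?,
                funds: vec![],
            })))
        }
        _ => Ok(None),
    }
}
```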
Libraries and Functions
Valence Libraries contain the business logic that can be applied to the funds held by Valence Base Accounts. Most often, this logic is about performing operations on tokens, such as splitting, routing, or providing liquidity on a DEX. A Valence Base Account has to first approve (authorize) a Valence Library for it to perform operations on that account's balances. A Valence Library exposes the Functions it supports. Valence Programs can be composed of graphs of Valence Base Accounts and Valence Libraries of varying complexity to form sophisticated cross-chain workflows. During the course of a Valence Program's execution, Functions are called by external parties to trigger the library's operations on the linked accounts.
A typical pattern for a Valence Library is to have one (or more) input account(s) and one (or more) output account(s). While many libraries implement this pattern, it is by no means a requirement.
Valence Libraries play a critical role in integrating Valence Programs with existing decentralized apps and services that can be found in many blockchain ecosystems (e.g., DEXes, liquid staking, etc.).
Now that we know accounts cannot perform any operations by themselves, we need to revisit the token swap program example (mentioned on the Base Accounts page) and bring Valence Libraries into the picture: the program receives an amount of Token A in an input account. A Token Swap library exposes a swap function that, when called, swaps the Token A held by the input account for Token B using a DEX on the same domain (e.g., Neutron) and transfers it to the transfer account. A Token Transfer library exposes a transfer function that, when called, transfers the Token B amount to a final output account on another domain (e.g., Osmosis). In this scenario, the DEX is an existing service found on the host domain (e.g., Astroport on Neutron), so it is not part of the Valence Protocol.
The program is then composed of the following accounts & libraries:
- A Valence Base Account is created on the Neutron domain to act as the input account.
- A Valence Base Account is created on the Neutron domain to act as the transfer account.
- A Token Swap Valence Library is created on the Neutron domain, authorized by the input account (to be able to act on the held Token A balance), and configured with the input account and transfer account as the respective input and output for the swap operation.
- A Token Transfer Valence Library is created on the Neutron domain, authorized by the transfer account (to be able to act on the held Token B balance), and configured with the transfer account and output account as the respective input and output for the transfer operation.
- A Valence Base Account is created on the Osmosis domain to act as the output account.
```mermaid
---
title: Valence Token Swap Program
---
graph LR
  FC[[Function call]]
  IA((Input Account))
  TA((Transfer Account))
  OA((Output Account))
  TS((Token Swap Library))
  TT((Token Transfer Library))
  DEX
  subgraph Neutron
    FC -- 1/Swap --> TS
    TS -- Swap Token A --> IA
    IA -- Token A --> DEX
    DEX -- Token B --> TA
    FC -- 2/Transfer --> TT
    TT -- Transfer Token B --> TA
  end
  subgraph Osmosis
    TA -- Token B --> OA
  end
```
This example highlights the crucial role that Valence Libraries play for integrating Valence Programs with pre-existing decentralized apps and services.
However, one thing remains unclear in this example: how are Functions called? This is where Programs and Authorizations come into the picture.
Programs and Authorizations
A Valence Program is an instance of the Valence Protocol. It is a particular arrangement and configuration of Accounts and libraries across multiple domains (e.g., a POL (protocol-owned liquidity) lending relationship between two parties). Similarly to how a library exposes executable functions, programs are associated with a set of executable Subroutines.
A Subroutine is a vector of Functions. A Subroutine can call out to one or more Function(s) from a single library, or from different libraries. A Subroutine is limited to one execution domain (i.e., Subroutines cannot use functions from libraries instantiated on multiple domains).
A Subroutine can be:
- Non Atomic (e.g., Execute function one. If that succeeds, execute function two. If that succeeds, execute function three. And so on.)
- or Atomic (e.g., execute function one, function two, and function three. If any of them fail, then revert all steps.)
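Putting the two variants above together, the shape of a Subroutine can be sketched as follows. This is illustrative only; the actual protocol types carry additional configuration such as retry logic:

```rust
// Illustrative sketch of the Subroutine shape described above.
enum Subroutine {
    // All functions execute in order; if any fails, every step reverts.
    Atomic(Vec<Function>),
    // Functions execute one by one; each must succeed before the next runs.
    NonAtomic(Vec<Function>),
}

// A Function points at a library call on a single execution domain.
struct Function {
    domain: String,
    library: String,
    message: Vec<u8>,
}
```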
Valence Programs are typically used to implement complex cross-chain workflows that perform financial operations in a trust-minimized way. Because multiple parties may be involved in a Valence Program, they may wish to impose limits on what each party is authorized to do.
To specify fine-grained controls over who can initiate the execution of a Subroutine, program creators use the Authorizations module.
The Authorizations module is a powerful and flexible system that supports access control configuration schemes, such as:
- Anyone can initiate execution of a Subroutine
- Only permissioned actors can initiate execution of a Subroutine
- Execution can only be initiated after a starting timestamp/block height
- Execution can only be initiated up to a certain timestamp/block height
- Authorizations are tokenized, which means they can be transferred by the holder or used in more sophisticated DeFi scenarios
- Authorizations can expire
- Authorizations can be enabled/disabled
- Authorizations can tightly constrain parameters (e.g., an authorization to execute a token transfer message can limit the execution to only supply the amount argument, not the denom or receiver in the transfer message)
To support the on-chain execution of Valence Programs, the Valence Protocol provides two important contracts: the Authorizations Contract and the Processor Contract.
The Authorizations Contract is the entry point for users. The user sends a set of messages to the Authorizations Contract and the label (id) of the authorization they want to execute. The Authorizations Contract then verifies that the sender is authorized and that the messages are valid, constructs a MessageBatch based on the subroutine, and passes this batch to the Processor Contract for execution. The authority to execute any Subroutine is tokenized so that these tokens can be transferred on-chain.
The Processor Contract receives a MessageBatch and executes the contained Messages in sequence. It does this by maintaining execution queues where the queue items are Subroutines. The Processor exposes a `Tick` message that allows anyone to trigger the processor, whereby the first batch in the queue is executed or moved to the back of the queue if it's not executable yet (e.g., the retry period has not passed).
```mermaid
graph LR;
  User --> |Subroutine| Auth(Authorizations)
  Auth --> |Message Batch| P(Processor)
  P --> |Function 1| S1[Library 1]
  P --> |Function 2| S2[Library 2]
  P --> |Function N| S3[Library N]
```
WIP: Middleware
The Valence Middleware is a set of components that provide a unified interface for the Valence Type system.
At its core, middleware is made up of the following components.
Design goals
TODO: describe modifiable middleware, design goals and philosophy behind it
These goals are achieved with three key components:
- brokers
- type registries
- Valence types
Middleware Brokers
Middleware brokers are responsible for managing the lifecycle of middleware instances and their associated types.
Middleware Type Registries
Middleware Type Registries are responsible for unifying a set of foreign types to be used in Valence Programs.
Valence Types
Valence Types are the canonical representations of types whose implementations vary across external domains.
Valence Asserter
The Valence Asserter enables Valence Programs to assert specific predicates at runtime. This is useful for programs that wish to enable conditional execution of a given function as long as some predicate evaluates to `true`.
Valence ZK coprocessor
⚠️ Note: Valence's ZK coprocessor is currently in specification stage and evolving rapidly. This document is shared to give partners a preview of our roadmap in the spirit of building in public.
The Valence ZK coprocessor is a universal DeFi execution engine. It allows developers to compose programs once and deploy them across multiple blockchains. Additionally, the coprocessor facilitates execution of arbitrary cross-chain messages with a focus on synchronizing state between domains. Using Valence, developers can:
- Build once, deploy everywhere. Write programs in Rust and settle on one or more EVM, Wasm, Move, or SVM chains.
- Avoid introducing additional trust assumptions. Only trust the consensus of the underlying chains you are building on.
While the actual execution is straightforward, the challenge lies in encoding state. The ZK program, as a pure function, must be able to utilize existing state as arguments to produce an evaluated output state.
Initially, we can develop an efficient version of this coprocessor with effort roughly on par with creating the state encoder. However, it is crucial to note that each chain will require a separate encoder implementation. The initial version will also require users to deploy their own verification keys along with the state mutation function on the target blockchain. Although the code required for this purpose will be minimal, users will still need to implement it themselves.
Longer term, we plan to develop a decoder that will automate the state mutation process based on the output of the ZK commitment. For this initial version, users will be able to perform raw mutations directly, as the correctness of ZK proofs will ensure the validity of messages according to the implemented ZK circuit.
```mermaid
---
title: ZK coprocessor overview
---
graph TB;
  %% Programs
  subgraph ZK coprocessor
    P1[zk program 1]
    P2[zk program 2]
    P3[zk program 3]
  end
  %% Chains
  C1[chain 1]
  C2[chain 2]
  C3[chain 3]
  P1 <--> C1
  P2 <--> C2
  P3 <--> C3
```
zkVM Primer
A zero-knowledge virtual machine (zkVM) is a zero-knowledge proof system that allows developers to prove the execution of arbitrary programs. In our case, these programs are written in Rust. Given a Rust program that can be described as a pure function `f(x) = y`, one can prove the evaluation in the following way:
- Define `f` using normal Rust code and compile the function as an executable binary
- With this executable binary, set up a proving key `pk` and verifying key `vk`
- Generate a proof `p` that `f` was evaluated correctly given input `x` using the zkVM, by calling `prove(pk, x)`
- Verify this proof `p` by calling `verify(vk, x, y, p)`
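In code, the four steps could look as follows. This is a hypothetical interface that mirrors the list above; real zkVMs expose analogous but differently named APIs:

```rust
// Hypothetical zkVM host interface mirroring the steps above.
struct ProvingKey;
struct VerifyingKey;
struct Proof;

// 2. Set up keys from the compiled guest binary.
fn setup(guest_binary: &[u8]) -> (ProvingKey, VerifyingKey) {
    unimplemented!()
}

// 3. Prove that f was evaluated correctly on input x, returning y and a proof.
fn prove(pk: &ProvingKey, x: u64) -> (u64, Proof) {
    unimplemented!()
}

// 4. Verify the proof against the public input/output pair.
fn verify(vk: &VerifyingKey, x: u64, y: u64, p: &Proof) -> bool {
    unimplemented!()
}

// 1. The guest program: a pure function f(x) = y, compiled to a binary.
fn f(x: u64) -> u64 {
    x.wrapping_mul(x).wrapping_add(1)
}

fn main() {
    let guest_binary: &[u8] = &[]; // placeholder for the compiled f
    let (pk, vk) = setup(guest_binary);
    let (y, p) = prove(&pk, 42);
    assert!(verify(&vk, 42, y, &p));
}
```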
Building the Valence ZK coprocessor
Let's assume that we have Valence Base Accounts in each domain. These accounts implement a kv store.
Every ZK computation will follow the format of a pure state transition function: we input a state `A`, apply the function `f` to it, and produce the resulting state `B`: `f(A) = B`.

For the function `f`, the chosen zkVM will generate a verifying key `K`, which remains consistent across all state transitions.
Encoding the account state: Unary Encoder
To ensure every state transition computed as a ZK proof by the coprocessor is a pure state transition function, we require a method to encode the entire account's state into initial and mutated forms, `A` and `B` respectively, for use in providing the applicable state modifications for the target chain.
In essence, let's consider an account whose state contains a map that assigns a balance (a u64 value) to each key. A contract execution transferring 100 tokens from key `m` to `n` can be achieved by invoking `state.transfer(signature, m, n, 100)`. This on-chain transfer function may look something like this:
```rust
fn transfer(&mut self, signature: Signature, from: Address, to: Address, value: u64) {
    assert!(signature.verify(&from));
    assert!(value > 0);

    let balance_from = self.get(&from).unwrap();
    let balance_to = self.get(&to).unwrap_or(0);

    let balance_from = balance_from.checked_sub(value).unwrap();
    let balance_to = balance_to.checked_add(value).unwrap();

    self.insert(from, balance_from);
    self.insert(to, balance_to);
}
```
Here, the pre-transfer state is `A`, and the post-transfer state is `B`.
Let's write a new function called `transfer_trusted` that leaves signature verification to the ZK coprocessor.
```rust
fn transfer_trusted(&mut self, from: Address, to: Address, value: u64) {
    let balance_from = self.get(&from).unwrap();
    let balance_to = self.get(&to).unwrap_or(0);

    self.insert(from, balance_from - value);
    self.insert(to, balance_to + value);
}
```
In the ZK setting, we execute the `transfer` function within the zkVM. We must input the encoded state of the account and receive as output the encoded state of the mutated account.
```rust
fn program(mut state: State, encoder: Encoder, arguments: Arguments) -> Commitment {
    // Commit to the account state before the transfer, and to the arguments
    let initial = encoder.commitment(&state);
    let args = encoder.commitment(&arguments);

    let (signature, from, to, value) = arguments;
    state.transfer(signature, from, to, value);

    // Commit to the mutated state
    let finalized = encoder.commitment(&state);

    // Compress the initial state, the arguments, and the final state
    // into a single output commitment
    encoder.commitment((initial, args, finalized))
}
```
Running this program within the zkVM also allows us to generate a `Proof`.
Upon receiving the `(Proof, Commitment, Arguments)` data, the target chain can validate the execution's correctness by verifying the proof and commitments, leveraging the ZK property that the proof is valid if, and only if, the contract's execution was accurate for the given inputs and the supplied commitments are those generated specifically for this proof.
```rust
fn verify(&mut self, proof: Proof, arguments: Arguments) {
    // Commit to the current state and the supplied arguments
    let current = self.state.commitment();
    let args = arguments.commitment();

    // Apply the state mutation without on-chain signature verification
    let (from, to, value) = arguments;
    self.transfer_trusted(from, to, value);

    let mutated = self.state.commitment();
    let commitment = (current, args, mutated).commitment();

    // The proof is valid iff the off-chain execution produced
    // exactly these commitments
    proof.verify(&self.vk, commitment);
}
```
By doing so, we switch from on-chain signature verification to computation over committed arguments, followed by ZK proof verification. Although we've presented a simplified example, the same verification process can accommodate any computation supported by a zkVM, enabling us to process multiple transfers in batches, perform intricate computation, and succinctly verify execution correctness. We refer to this pattern as a "Unary Encoder" because we compress the two states of the account, 'current' and 'mutated', into a single zero-knowledge proof.
The Unary Encoder will be responsible for compressing any chain account state into a compatible commitment for the chosen zkVM (in our case, a RISC-V zkVM). The encoding is a one-way function that allows anyone in possession of the pre-image, i.e. the inputs to the encoding function, to reconstruct the commitment. This commitment will be transparent to the target chain, enabling its use in constructing the block header for verification purposes.
Handling state transition dependencies across domains: Merkelized Encoder
Let's assume a hypothetical situation where we aim to achieve decoupled state updates across three distinct chains: `chain 1`, `chain 2`, and `chain 3`. The objective is to generate a unified ZK proof that verifies the correctness of the state transitions on all chains.

Specifically, `chain 3` will depend on a mutation from `chain 1`, while `chain 2` operates independently of the mutations on both `chain 1` and `chain 3`.
```mermaid
graph TB
  %% Root node
  r[R]
  %% Level 1
  m1[M1] --> r
  m2[M2] --> r
  %% Level 2
  c1[C1] --> m1
  c2[C2] --> m1
  c3[C3] --> m2
  zero((0)) --> m2
  %% Level 3
  chain1["(S1 --> T1), K1"] -- chain 1 transition encoding --> c1
  chain2["(S2 --> T2), K2"] -- chain 2 transition encoding --> c2
  chain3["(S3 --> T3), K3"] -- chain 3 transition encoding --> c3
```
The Merkle graph above depicts the state transitions that can be compressed into a single commitment via Merkelization. Given an encoder with a specialized argument (a Sparse Merkle tree containing encoded state transition values, indexed by the program's view key on the target blockchain), we obtain a Merkle root denoted as `R`.
The ZK coprocessor can execute proof computations either sequentially or in parallel. The parallel computation associated with `C2` operates independently and generates a unary proof of `S2 -> T2`. Conversely, the proof for `C3` requires querying `T1`.

Since `chain 3` has a sequential execution, the coprocessor will first process `C1`, then relay the pre-image of `T1` to the coprocessor responsible for computing `C3`. Due to the deterministic nature of unary encoding, the `chain 3` coprocessor can easily derive `T1` and validate its foreign state while concurrently processing `C3`.
At this point, no justification has been given for Merkelizing the produced proofs; hashing the entire set of Merkle arguments would work as well. However, it's worth noting that `chain 2` does not require knowledge of the data `(S1, T1, K1, S3, T3, K3)`. Including such information in the verification arguments of `chain 2` would unnecessarily burden its proving process. A Merkle tree is employed here for its logarithmic verification property: the condensed proof generated for `chain 2` will only require a Merkle opening to `R`, without requiring excess state data from other chains. Essentially, when generating the Merkelized proof, the `chain 2` coprocessor, after computing `C2`, will need only `C1` and `M2`, rather than all the Merkle arguments.

Finally, each chain will receive `R`, accompanied by its individual state transition arguments, and the Merkle path leading to `R` will be proven inside the circuit.
```mermaid
---
title: On-chain Proof Verification
---
graph TD;
  coprocessor -- "(R1, T1)" --> chain1
  coprocessor -- "(R2, T2)" --> chain2
  coprocessor -- "(R3, T3, R1, T1, C2)" --> chain3
```
In this diagram, we see that `chain 3` will first `verify(R3, T3)`, then `verify(R1, T1)`, then it will `query(T1)`, then compute `C1 := encoding(S1, T1)`, then compute `C3 := encoding(S3, T3)`, and finally assert `R == H(H(C1, C2), H(C3, 0))`.
Sparse Merkle tree
A sparse Merkle tree (SMT) is a specialized version of a Merkle tree, characterized by a leaf index defined by an injective function fixed at the design level. The verification key of a ZK circuit is another such constant, also injective to the circuit's definition, and can therefore serve as an index for the available programs.
In the context of a ZK proof being a product of its verification key (alongside other attributes), it allows us to index a proof from a collection of proofs for distinct programs.
Assuming that we don't reuse the same proof for different purposes during a state transition (the program will either be raw or recursed), the verifying key is a unique index into such a collection.
This document describes a sparse Merkle tree design that employs indexing proofs based on the hash of the verification key.
Merkle tree
A Merkle tree is typically a (binary) tree structure consisting of leaves and nodes. Each node in this tree represents the cryptographic hash of its children, while the leaves hold an arbitrary piece of data—usually the hash value of some variable input.
For a hash function `H`, if we insert the data items A, B, C into a Merkle tree, the resulting structure would look like:
```mermaid
graph TB
  %% Root node
  r["R := H(t10, t11)"]
  %% Level 1
  m1["t10 := H(t00, t01)"] --> r
  m2["t11 := H(t02, t03)"] --> r
  %% Level 2
  c1["t00 := H(A)"] --> m1
  c2["t01 := H(B)"] --> m1
  c3["t02 := H(C)"] --> m2
  c4["t03 := 0"] --> m2
```
Membership proof
A Merkle tree serves as an efficient data structure for validating the membership of a leaf node within a set in logarithmic time, making it especially useful for handling large sets. A Merkle opening (or Merkle proof) is an array of sibling nodes that outlines a Merkle path leading to a commitment root. If the verifier possesses the root and a cryptographic hash function is used, the pre-image of the hash is non-malleable: it is infeasible to discover a set of siblings resulting in the root other than the valid inputs. Given that the leaf node is known to the verifier, a Merkle proof consists of a sequence of hashes leading up to the root. This allows the verifier to compute the root value and compare it with the known Merkle root, thereby confirming the membership of any alleged member without relying on the trustworthiness of the source. Consequently, a single hash commitment ensures that any verifier can securely validate the membership of any proposed member supplied by an untrusted party.
In the example above, the Merkle opening for `C` is the set of siblings along the path to the root, that is: `[t03, t10]`. The verifier, who knows `R` beforehand, will compute:

```
t02 := H(C)
t11 := H(t02, t03)
R'  := H(t10, t11)
```

If `R == R'`, then `C` is a member of the set.

Note that the depth of the tree is the length of its Merkle opening; that is, we open up to a node with depth equal to the length of the proof.
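A minimal sketch of this verification in Rust, assuming SHA-256 and explicit left/right flags for each sibling (the protocol's actual hash primitive and node layout may differ):

```rust
use sha2::{Digest, Sha256};

type Hash = [u8; 32];

fn hash_leaf(data: &[u8]) -> Hash {
    Sha256::digest(data).into()
}

fn hash_node(left: &Hash, right: &Hash) -> Hash {
    let mut hasher = Sha256::new();
    hasher.update(left);
    hasher.update(right);
    hasher.finalize().into()
}

/// Recompute the root from a leaf and its sibling path, then compare it
/// against the known root R.
fn verify_membership(root: &Hash, leaf: &[u8], path: &[(Hash, bool)]) -> bool {
    let mut acc = hash_leaf(leaf);
    for (sibling, current_is_left) in path {
        acc = if *current_is_left {
            hash_node(&acc, sibling)
        } else {
            hash_node(sibling, &acc)
        };
    }
    acc == *root
}
```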
Sparse Data
Let's consider a public function `f` that accepts a member and returns a tuple. This tuple consists of the index within the tree as a `u64` value and the hash of the leaf: `(i, h) = f(X)`.

For the example above, let's assume two members:

```
(3, a) := f(A)
(1, b) := f(B)
```
```mermaid
graph TB
  %% Root node
  r["R := H(t10, t11)"]
  %% Level 1
  m1["t10 := H(t00, t01)"] --> r
  m2["t11 := H(t02, t03)"] --> r
  %% Level 2
  c1["t00 := 0"] --> m1
  c2["t01 := b"] --> m1
  c3["t02 := 0"] --> m2
  c4["t03 := a"] --> m2
```
The primary distinction of a sparse Merkle tree lies in the deterministic leaf index, making it agnostic to input order. In essence, this structure forms an unordered set whose equivalence remains consistent irrespective of the sequence in which items are appended.
The behavior of the membership proof in this context mirrors that in a traditional Merkle tree, except that a sparse Merkle tree enables the generation of a non-membership proof. To achieve this, we carry out a Merkle opening at the specified target index and expect it to be `0`.
Let's assume a non-member `X` to be `(0, x) := f(X)`. To prove non-membership, we broadcast `[b, t11]`. To verify the non-membership of `X`, knowing `R` and the non-membership proof, we compute:

```
(0, x) := f(X)
t10 := H(0, b)   ; here we open to 0
R'  := H(t10, t11)
```

If `R == R'`, then `0` is at the slot of `X`. Since we know `X` not to be the pre-image of `0` under `H`, `X` is not a member of the tree.
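Non-membership verification reuses the same root recomputation, but starts from the empty leaf at the target slot. Continuing the sketch above:

```rust
/// Prove that a slot is empty: open the target index to a zeroed leaf
/// and check that the recomputed root matches R.
fn verify_non_membership(root: &Hash, empty_leaf: &Hash, path: &[(Hash, bool)]) -> bool {
    let mut acc = *empty_leaf;
    for (sibling, current_is_left) in path {
        acc = if *current_is_left {
            hash_node(&acc, sibling)
        } else {
            hash_node(sibling, &acc)
        };
    }
    acc == *root
}
```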
Valence SMT
Within the scope of Valence, the sparse Merkle tree is designed to utilize the hash of the verifying key generated by the ZK circuit as its index. The tree's leaf data will encompass the proof and input arguments for the ZK program. In this particular implementation, we can consider the input arguments as a generic type, which will be specifically defined during development. These input arguments will constitute the key-value pairs that define a subset of the contract state essential for state transition. The proof will be a vector of bytes.
The tree depth will be adaptive, representing the smallest feasible value required to traverse from the leaf nodes up to the root, given the number of elements involved. This approach ensures we avoid unnecessary utilization of nodes containing unused entries.
For instance, if the tree contains two adjacent nodes indexed at `[(0,0), (0,1)]`, the Merkle opening proof will have a single element: the sibling leaf of the validated node. If the tree comprises two nodes with indices `[(0,0), (0,2)]`, the Merkle opening will require two elements, allowing for a complete traversal from the leaves to the root.
Precomputed empty subtrees
Every Merkle tree implementation should include a pre-computed set of empty subtrees, based on the selected hash primitive. To avoid unnecessary computational expenditure, it is more efficient to pre-compute the roots of subtrees consisting solely of zeroed leaves. For instance, all the nodes of the following Merkle tree are constant values for `H`:
```mermaid
graph TB
  %% Root node
  r["R := H(t10, t11)"]
  %% Level 1
  m1["t10 := H(t00, t01)"] --> r
  m2["t11 := H(t02, t03)"] --> r
  %% Level 2
  c1["t00 := 0"] --> m1
  c2["t01 := 0"] --> m1
  c3["t02 := 0"] --> m2
  c4["t03 := 0"] --> m2
```
Let's assume we have a long path on a sparse Merkle tree with a single leaf `X` at index 2:
```mermaid
graph TB
  %% Root
  r["R := H(t20, K2)"]
  %% Level 1
  t20["t20 := H(K1, t11)"] --> r
  t21["K2"] --> r
  %% Level 2
  m1["K1"] --> t20
  m2["t11 := H(X, K0)"] --> t20
  %% Level 3
  c3["X"] --> m2
  c4["K0"] --> m2
```
It would be a waste to compute `(K0, K1, K2)` here, as they are, respectively, `K0 := H(0)`, `K1 := H(K0, K0)`, `K2 := H(K1, K1)`. In other words, they are constant values that should always be available: they should never have to hit the database backend to be fetched, nor should they exist as data nodes. Whenever the tree queries for a node that doesn't exist on the data backend, it should return the constant precomputed empty subtree for that depth.
Normally, trees will support precomputed values up to a certain depth. If we adopt a hash function with a 16-bit output, we should have 16 precomputed empty subtrees.
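Reusing the hash helpers from the membership sketch above, the constants `K0, K1, K2, ...` can be generated once at startup. A sketch:

```rust
/// Precompute the roots of fully-empty subtrees for every depth:
/// K0 := H(0), K(n+1) := H(Kn, Kn).
fn precompute_empty_subtrees(max_depth: usize) -> Vec<Hash> {
    let mut subtrees = Vec::with_capacity(max_depth + 1);
    let mut acc = hash_leaf(&[0u8]); // K0
    subtrees.push(acc);
    for _ in 0..max_depth {
        acc = hash_node(&acc, &acc); // K(n+1)
        subtrees.push(acc);
    }
    subtrees
}
```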
Future upgrades
We don't expect the MVP to be optimized. That is, we should have a working implementation, but one not yet optimized for specific use-cases.
- Hash: In the context of sparse Merkle trees, the MVP could employ a widely accepted cryptographic hash function as its fundamental building block. For example, Keccak256, which is native to the EVM, could be used due to its broad availability. However, using this hash function may lead to an extensive gap between nodes, potentially resulting in a tree structure with only 2 leaves yet significant depth, as the hashes of the two verifying keys might be exceptionally far apart. A future improvement would be to choose a cryptographic hash that keeps the leaf nodes close. One cheap method to achieve this is to take the initial `n` bits (e.g., 16) of the hash output and use them as the index, given that any secure cryptographic hash maintains its collision resistance and avalanche effect characteristics at the target security level with the selected number of sampled bits. Since we don't anticipate dealing with a number of programs anywhere near a 256-bit count, 16 bits should be more than sufficient for this purpose.
- Data backend: In typical scenarios, the number of nodes in a proof batch shouldn't be large: 8 bits should suffice to represent the number of programs; for very complex and large batches, 16 bits should suffice. Choosing a database backend for a Merkle tree can be challenging because it involves deciding on storage methodologies and optimizing database seek operations to retrieve nodes from the same path on a single page when possible. However, with a limited number of nodes, a streamlined database backend could suffice, delivering requested nodes without regard for the total page count. Given this performance constraint, we should prioritize compatibility over optimization: the ability to use the same backend across multiple blockchain clients and execution environments is more crucial than fine-tuning something that functions well only under specific conditions.
Authorization & Processors
The Authorization and Processor contracts are foundational pieces of the Valence Protocol, as they enable on-chain (and cross-chain) execution of Valence Programs and enforce access control to the program's Subroutines via Authorizations.
This section explains the rationale for these contracts and shares insights into their technical implementation, as well as how end-users can interact with Valence Programs via Authorizations.
Rationale
- To have a general purpose set of smart contracts that provide users with a single point of entry to interact with the Valence Program, which can have libraries and accounts deployed on multiple chains.
- To have all the user authorizations for multiple domains in a single place, making it very easy to control the application.
- To have a single address (Processor) that will execute the messages for all the contracts in a domain using execution queues.
- To only tick a single contract (Processor) that will go through the queues to route and execute the messages.
- To create, edit, or remove different application permissions with ease.
Assumptions
- Funds: You cannot send funds with the messages.

- Bridging: We assume that messages can be sent and confirmed bidirectionally between domains. The Authorization contract on the main domain communicates with the processor on a different domain in one direction, and the callback confirming correct or failed execution travels in the other direction.

- Instantiation: All these contracts can be instantiated beforehand and off-chain with predictable addresses. Here is an example instantiation flow using Polytone:

  - Predict the `authorization` contract address.
  - Instantiate the Polytone contracts & set up relayers.
  - Predict the `proxy` contract address for the `authorization` contract on each external domain.
  - Predict the `proxy` contract address on the main domain for each processor on external domains.
  - Instantiate all `processors`. The sender on external domains will be the predicted `proxy`, and on the main domain it will be the Authorization contract itself.
  - Instantiate the Authorization contract with all the processors and their predicted proxies for external domains, and the processor on the main domain.

- Relaying: Relayers will be running once everything is instantiated.

- Tokenfactory: The main domain has the tokenfactory module with no token creation fee, so that we can create and mint these nonfungible tokens at no additional cost.

- Domains: In the current version, actions in each authorization will be limited to a single domain.
Authorization Contract
The Authorization contract will be a single contract deployed on the main domain that defines the authorizations of the top-level application, which can include libraries in different domains (chains). For each domain, there will be one Processor in charge of executing the functions on the libraries. The Authorization contract will connect to all of the Processor contracts using a connector (e.g. Polytone Note, Hyperlane Mailbox…) and will route the message batches to be executed to the right domain. At the same time, for each external domain, we will have a proxy contract (e.g. Polytone Proxy, Hyperlane Mailbox...) on the main domain which will receive the callbacks sent from the processor on the external domain with the `ExecutionResult` of the `MessageBatch`.
The contract will be instantiated once at the very beginning and will be used during the entire top-level application lifetime. Users will never interact with the individual Smart Contracts of each program, but with the Authorization contract directly.
Instantiation
When the contract is instantiated, it will be provided the following information:
- Processor contract on the main domain.
- Owner of the contract.
- List of subowners (if any): users that can execute the same actions as the owner, except adding/removing other subowners.
Once the Authorization contract is deployed, we can already start adding and executing authorizations on the domain that the Authorization contract was deployed on. To execute functions on other domains, the owner will have to add external domains to the Authorization contract with all the information required for the Authorization contract to route the messages to that domain.
Owner Functions
- `create_authorizations(vec[Authorization])`: provides the authorization list, which is the core information of the Authorization contract; it includes all the possible sets of functions that can be executed. Each authorization contains the following information:

  - Label: unique name of the authorization. This label will be used to identify the authorization and will be used as the subdenom of the tokenfactory token in case it is permissioned. Due to tokenfactory module restrictions, the max length of this field is 44 characters. Example: if the label is `withdraw` and only address `neutron123` is allowed to execute this authorization, we will create the token `factory/<contract_addr>/withdraw` and mint one to that address. If `withdraw` was permissionless, there is no need for any token, so it's not created.

  - Mode: can either be `Permissioned` or `Permissionless`. If `Permissionless` is chosen, any address can execute this function list. In the case of `Permissioned`, we will also say what type of permissioning we want (with `CallLimit` or without); a list of addresses will be provided in both cases. If there is a `CallLimit`, we will mint a certain amount of tokens for each address that is passed; if there isn't, we will only mint one token and that token will be used all the time.

  - NotBefore: from what time the authorization can be executed. We can specify a block height or a timestamp.

  - Expiration: until when (what block or timestamp) this authorization is valid.

  - MaxConcurrentExecutions (default 1): to prevent DDoS attacks and clogging of the execution queues, an authorization's subroutines are allowed to be present in the execution queue a maximum number of times (default 1 unless overwritten).

  - Subroutine: set of functions in a specific order to be executed. Subroutines can be of two types: `Atomic` or `NonAtomic`. For `Atomic` subroutines, we will provide an array of `Atomic` functions, an optional `expiration_time`, and an optional `RetryLogic` for the entire subroutine. For `NonAtomic` subroutines, we will just provide an array of `NonAtomic` functions and an optional `expiration_time`. The `expiration_time` defines how long messages executing a subroutine remain valid once they are sent from the authorization contract. This is particularly useful for domains that use relayers without timeouts (e.g. Hyperlane). If the `expiration_time` is not provided, the relayer can go down for an indefinite amount of time and the messages will still be valid and execute when it's back up. If the `expiration_time` is provided, the messages will be valid for that amount of time (obtained by adding the `expiration_time` to the current block timestamp), and if the relayer is down for longer than that, the messages will be considered expired once execution is attempted in the Processor contract, returning an `Expired` result.

    - `AtomicFunction`: each Atomic function has the following parameters:

      - Domain of execution (must be the same for all functions in v1).

      - MessageDetails: type (e.g. CosmwasmExecuteMsg, EvmCall ...) and message information. Depending on the type of message being sent, we might need to provide additional values and/or only some specific `ParamRestrictions` can be applied:

        - If we are sending messages that are not for a `CosmWasm ExecutionEnvironment` and the message passed doesn't contain raw bytes for that particular VM (e.g. `EvmRawCall`), we need to provide the `Encoder` information for that message along with the name of the library that the `Encoder` will use to encode that message. For example, if we are sending a message for an `EvmCall` on an EVM domain, we need to provide the address of the `Encoder Broker` and the `version` of the `Encoder` that the broker needs to route the message to, along with the name of the library that the `Encoder` will use to encode that message (e.g. `forwarder`).

        - For all messages that are not raw bytes (`json` formatted), we can apply any of the following `ParamRestrictions`:
          - `MustBeIncluded`: the parameter must be included in the message.
          - `CannotBeIncluded`: the parameter cannot be included in the message.
          - `MustBeValue`: the parameter must have a specific value.

        - For all messages that are raw bytes, we can only apply the `MustBeBytes` restriction, which checks that the bytes sent are the same as the ones provided in the restriction, limiting the authorization execution to only one specific message.

      - Contract address that will execute it.

    - `NonAtomicFunction`: each NonAtomic function has the following parameters:

      - Domain of execution.

      - MessageDetails (same as above).

      - Contract address that will execute it.

      - RetryLogic (optional, self-explanatory).

      - CallbackConfirmation (optional): defines whether a `NonAtomicFunction` is completed after receiving a callback (Binary) from a specific address instead of after a correct execution. This is used when correct message execution is not enough to consider the message completed, so it defines what callback we should receive from a specific address to flag that message as completed. For this, the processor will append an `execution_id` to the message, which will also be passed in the callback by the service to identify what function the callback is for.

  - Priority (default Med): the priority of a set of functions can be set to High. If this is the case, they will go into a preferential execution queue. Messages in the `High` priority queue will be handled before messages in the `Med` priority queue. All authorizations will have an initial state of `Enabled`.

  Here is an example of an Authorization table after its creation:

- `add_external_domains([external_domains])`: to add an `ExternalDomain` to the Authorization contract, the owner will specify what type of `ExecutionEnvironment` it has (e.g. `CosmWasm`, `Evm` ...) and all the information required for that type of `ExecutionEnvironment`. For example, if the owner is adding a domain that uses `CosmWasm` as ExecutionEnvironment, they need to provide all the Polytone information; if they are adding a domain that uses `EVM` as ExecutionEnvironment, they need to provide all the Hyperlane information and the `Encoder` to be used for correctly encoding messages in the corresponding format.

- `modify_authorization(label, updated_values)`: can modify certain updatable fields of the authorization: start_time, expiration, max_concurrent_executions and priority.

- `disable_authorization(label)`: puts an Authorization in state `Disabled`. These authorizations can not be run anymore.

- `enable_authorization(label)`: puts an Authorization in state `Enabled` so that it can be run again.

- `mint_authorization(label, vec[(addresses, Optional: amounts)])`: if the authorization is `Permissioned` with `CallLimit: true`, this function will mint the corresponding token amounts of that authorization to the addresses provided. If `CallLimit: false`, it will mint 1 token to the new addresses provided.

- `pause_processor(domain)`: pause the processor of the domain.

- `resume_processor(domain)`: resume the processor of the domain.

- `insert_messages(label, queue_position, queue_type, vec[ProcessorMessage])`: adds this set of messages to the queue at a specific position in the queue.

- `evict_messages(label, queue_position, queue_type)`: removes the set of messages from the specific position in a queue.

- `add_sub_owners(vec[addresses])`: adds the given addresses as 2nd tier owners. These sub_owners can do everything except adding/removing admins.

- `remove_sub_owners(vec[addresses])`: removes these addresses from the sub_owner list.
User Actions
- `send_msgs(label, vec[ProcessorMessage])`: users can run an authorization with a specific label. If the authorization is `Permissioned (without limit)`, the Authorization contract will check whether the account is allowed to execute by verifying that it holds the corresponding token in its wallet. If the authorization is `Permissioned (with limit)`, the account must attach the authorization token to the contract execution. Along with the authorization label, the user will provide an array of encoded messages, together with the message type (e.g. `CosmwasmExecuteMsg`, `EvmCall`, etc.) and any other parameters required for that specific ProcessorMessage (e.g. for a `CosmwasmMigrateMsg` we also need to pass a code_id). The contract will then check that the messages match those defined in the authorization, that the messages appear in the correct order, and that any applied parameter restrictions are satisfied.

  If all checks pass, the contract will route the messages to the correct Processor with an `execution_id` for the processor to call back with. This `execution_id` is unique for the entire application. If execution of all actions is confirmed via a callback, the authorization token is burned. If execution fails, the token is sent back. Here is an example flowchart of how a user interacts with the Authorization contract to execute functions on an external CosmWasm domain that is connected to the main domain with Polytone:
Processor Contract
The Processor will be a contract on each domain within the program. The Processor handles execution of message batches it receives from the Authorization contract. Depending on the Processor type in use, its features will vary. There are currently two types of processors: Lite Processor and Processor. The former is a simplified version of the latter. The Lite Processor has limited functionality to optimize for gas-constrained domains.
The Processor will be instantiated in advance with the correct address that can send messages to it, according to the instantiation flow described in the Assumptions section.
In the table below we summarize the main characteristics of the processors supported:
| | Processor | Lite Processor |
|---|---|---|
| Execution Environment | CosmWasm | EVM |
| Stores batches in queues | Yes, FIFO queue with priority | No, executed immediately by relayer |
| Needs to be ticked | Yes, permissionlessly | No |
| Messages can be retried | Yes | No |
| Can confirm non-atomic function with callback | Yes | No |
| Supports Pause operation | Yes | Yes |
| Supports Resume operation | Yes | Yes |
| Supports SendMsgs operation | Yes | Yes |
| Supports InsertMsgs operation | Yes | No, no queues to insert into |
| Supports EvictMsgs operation | Yes | No, no queues to remove from |
Processor
This version of the processor is currently available for the `CosmWasm` execution environment only. It contains all the features and full functionality of the processor as described below.
It handles two execution queues, `High` and `Med`, which allow giving different priorities to message batches. The Authorization contract will send message batches to the Processor, specifying the priority of the queue where they should be enqueued.
The Processor can be `ticked` permissionlessly, which triggers the execution of the message batches in the queues in a FIFO manner. It will handle the `Retry` logic for each batch (if the batch is atomic) or function (if the batch is non-atomic). In the particular case that the current batch at the top of the queue is not yet retriable, the processor will rotate it to the back of the queue. After a `MessageBatch` has been executed successfully or has reached the maximum amount of retries, it will be removed from the execution queue and the Processor will send a callback with the execution information to the Authorization contract.
The Authorization contract will be the only address allowed to add message batches to the execution queues. It will also be allowed to Pause/Resume the Processor or to arbitrarily remove functions from the queues or add certain messages at a specific position in any of them.
Execution
When a processor is `Ticked`, the first `MessageBatch` will be taken from the queue (`High` if there are batches there, or `Med` if there aren't).
After taking the `MessageBatch`, the processor will first check whether the batch has expired. If that's the case, the processor will discard the batch and return an `Expired(executed_functions)` `ExecutionResult` to the Authorization contract. There might be a case where the batch is `NonAtomic` and already partially executed, therefore the processor also returns the number of functions that were executed before the expiration.
If the batch has not expired, the processor will execute the batch according to whether it is `Atomic` or `NonAtomic`.
- For `Atomic` batches, the Processor will execute either all functions or none of them. If execution fails, the batch's `RetryLogic` is checked to determine whether the batch should be re-enqueued. If not, a callback is sent with a `Rejected(error)` status to the Authorization contract. If the execution succeeded, a callback with `Executed` status is sent to the Authorization contract.

- For `NonAtomic` batches, we will execute the functions one by one, applying the RetryLogic individually to each function if it fails. `NonAtomic` functions might also be confirmed via `CallbackConfirmations`, in which case we will keep them in a separate storage location until we receive that specific callback. Each time a function is confirmed, we re-queue the batch and keep track of which function has to be executed next. If at some point a function uses up all its retries, the processor will send a callback to the Authorization contract with a `PartiallyExecuted(num_of_functions_executed, execution_error)` execution result if some functions succeeded, or `Rejected(error)` if none did. If all functions are executed successfully, an `Executed` execution result will be sent. For `NonAtomic` batches, the processor must be ticked each time the batch is at the top of the queue to continue, so at least as many ticks are required as there are functions in the batch.
Storage
The Processor will receive message batches from the Authorization contract and will enqueue them in a custom storage structure called a `QueueMap` (a minimal in-memory sketch follows the field list below). This structure is a FIFO queue with owner privileges, which allow the owner to insert or remove messages from any position in the queue. Each "item" stored in the queue is a `MessageBatch` object with the following structure:
```rust
pub struct MessageBatch {
    pub id: u64,
    pub msgs: Vec<ProcessorMessage>,
    pub subroutine: Subroutine,
    pub priority: Priority,
    pub retry: Option<CurrentRetry>,
}
```
- id: the global id of the batch. The Authorization contract identifies each batch with an id so that it can interpret the callbacks it receives from each processor. This id is unique for the entire application.
- msgs: the messages the processor needs to execute for this batch (e.g. a CosmWasm ExecuteMsg or MigrateMsg).
- subroutine: the config that the authorization table defines for the execution of these functions. This field indicates whether the functions need to be executed atomically or non-atomically, for example, and the retry logic for each batch/function depending on the config type.
- priority (for internal use): batches will be queued in different priority queues when they are received from the Authorization contract. We also keep this priority here because batches might need to be re-queued after a failed execution and we need to know which queue to re-queue them into.
- retry (for internal use): the current retry we are at (if the execution previously failed), to know when to abort if we exceed the max retry amount.
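For intuition, here is a minimal in-memory analogue of the `QueueMap` semantics (a hypothetical sketch; the real `QueueMap` is a contract storage structure, not a `VecDeque`):

```rust
use std::collections::VecDeque;

// In-memory analogue: FIFO for normal operation, plus positional
// insert/remove that only the owner may call in the contract.
struct QueueMap<T> {
    inner: VecDeque<T>,
}

impl<T> QueueMap<T> {
    fn new() -> Self {
        Self { inner: VecDeque::new() }
    }
    // Regular FIFO operations used by the Processor.
    fn enqueue(&mut self, item: T) {
        self.inner.push_back(item);
    }
    fn dequeue(&mut self) -> Option<T> {
        self.inner.pop_front()
    }
    // Owner-privileged operations (InsertMsgs / EvictMsgs).
    fn insert_at(&mut self, index: usize, item: T) {
        self.inner.insert(index, item);
    }
    fn remove_at(&mut self, index: usize) -> Option<T> {
        self.inner.remove(index)
    }
}

fn main() {
    let mut queue: QueueMap<u64> = QueueMap::new();
    queue.enqueue(1); // batch id 1
    queue.enqueue(3); // batch id 3
    queue.insert_at(1, 2); // owner inserts batch id 2 between them
    assert_eq!(queue.dequeue(), Some(1)); // FIFO order preserved
    assert_eq!(queue.remove_at(0), Some(2)); // owner evicts batch id 2
}
```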
Lite Processor
This is a simplified version of the Processor contract, with more limited functionality that is optimized for specific domains where gas costs are critical. This version of the processor is currently available for `EVM` execution environments only.
The main difference between the Lite Processor and the Processor is that the former does not store message batches, but instead executes messages directly when received. The Lite Processor does not handle retries, function callbacks, or queues. More details can be found below.
Execution
The Lite Processor is not `ticked`; instead, it will receive a `MessageBatch` from the Authorization contract and execute it immediately. Therefore, the execution gas cost will be paid by the relayer of the batch instead of the user who ticks the processor.
The `MessageBatch` received might already have expired, which can happen if the relayer was not working or was slow to deliver the batch. In this case, the Processor will discard the batch and return an `Expired(0)` `ExecutionResult` to the Authorization contract.
This processor does not store batches or use any queues; instead, it simply receives the batch, executes it atomically or non-atomically, and sends a callback to the Authorization contract with the `ExecutionResult`. The only information stored by this processor is the information of the Authorization contract, the information of the Connector (e.g. Hyperlane Mailbox, origin domain id, ...) and the authorized entities that can also execute batches on it without requiring them to be sent from the main domain.
Since there are no queues, operations like `InsertAt` or `RemoveFrom` queue that the owner of the Authorization contract may perform on the Processor are not available on the Lite Processor. Therefore, the operations that the Lite Processor supports from the Authorization contract are limited to: `Pause`, `Resume` and `SendMsgs`.
In addition to the limitations above, the Lite Processor does not support retries or function callbacks. This means that the `MessageBatch` received will be executed only once, and `NonAtomic` batches cannot be confirmed asynchronously because batch execution is attempted once, non-atomically, the moment it is received.
In addition to executing batches that come from the Authorization contract, the Lite Processor defines a set of authorized addresses that can send batches to it for execution. Since the Processor can execute batches from any of these addresses, we only send a callback if the address that sent the batch is a smart contract. Thus the authorized addresses are in charge of handling or ignoring these callbacks.
Execution Environment Differences
Depending on the type of `ExecutionEnvironment` being used, the behavior of the Processor may vary. In this section we describe the main differences in how the Processor behaves in the different execution environments that we support.
Execution Success
During the execution of a `MessageBatch`, the Processor will execute each function of the subroutine of that batch. If the execution of a specific function fails, we consider the execution failed in the case of `Atomic` batches, and we stop executing the remaining functions in the case of `NonAtomic` batches.
Currently, in the `CosmWasm` execution environment, a function fails if the `CosmWasm` contract that we are targeting doesn't exist, if the entry point of that contract doesn't exist, or if the execution of the contract fails for any reason. In contrast, in the `EVM` execution environment, a function only fails if the contract explicitly fails or reverts.
To mitigate the differences in behavior between these two execution environments, a check was included in the `EVM` Processor to verify that the targeted contract exists and to fail execution if it does not. Behavior was also added in the `EVM` libraries to revert if the execution of the contract enters the fallback function, which is not allowed in the system. Nevertheless, since Processors are not restricted to Valence Libraries but can call any contract, no guarantee can be made that the targeted contract will fail if an entry point does not exist, because the fallback function might not be defined or might not revert.
In `CosmWasm`, execution of a contract will always fail if the entry point does not exist. For `EVM` execution, however, this is not necessarily the case. This is a difference that the owner of the program must take into account when designing and creating their program.
In summary: if a function of the subroutine targets a contract that meets all of the following conditions:

- it is not a Valence Library,
- the entry point of that contract does not exist, and
- the fallback function is either not defined or doesn't explicitly revert,

then the execution of that function will be considered successful in the `EVM` execution environment but not in the equivalent `CosmWasm` execution environment.
Callbacks
There are different types of callbacks in our application. Each of them has a specific function and is used in different parts of the application.
Function Callbacks
For the execution of `NonAtomic` batches, each function in the batch can optionally be confirmed with a callback from a specific address. When the processor reaches a function that requires a callback, it will inject the execution_id of the batch into the message that is going to be executed on the library. This means the library needs to be ready to receive that execution_id, and must know what the expected callback is and where it has to come from to confirm that function; otherwise, that function will stay unconfirmed and the batch will not move on to the next function. The callback will be sent to the processor with the execution_id so that the processor knows which function is being confirmed. The processor will then validate that the correct callback was received from the correct address.
If the processor receives the expected callback from the correct address, the batch will move to the next function. If it receives a different callback than expected from that address, the execution of that function is considered to have failed and it will be retried (if applicable). In either case, a callback must be received to determine if the function was successful or not.
Note: This functionality is not available on the Lite Processor, as this version of the processor is not able to receive asynchronous callbacks from libraries.
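As a rough sketch of what the library-side confirmation might look like (the `ProcessorExecuteMsg` shape here is hypothetical, not the actual wire format), the library echoes the injected execution_id back to the processor once its asynchronous work completes:

```rust
use cosmwasm_schema::cw_serde;
use cosmwasm_std::{to_json_binary, Binary, CosmosMsg, Response, StdResult, WasmMsg};

// Hypothetical callback message; the real processor API may differ.
#[cw_serde]
enum ProcessorExecuteMsg {
    Callback { execution_id: u64, msg: Binary },
}

// Once the library's asynchronous work is done, it confirms the pending
// function by echoing the execution_id the processor injected earlier.
fn confirm_function(processor_addr: String, execution_id: u64) -> StdResult<Response> {
    let callback = WasmMsg::Execute {
        contract_addr: processor_addr,
        msg: to_json_binary(&ProcessorExecuteMsg::Callback {
            execution_id,
            msg: Binary::default(),
        })?,
        funds: vec![],
    };
    Ok(Response::new().add_message(CosmosMsg::Wasm(callback)))
}
```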
Processor Callbacks
Once a Processor batch is executed, or it fails with no more retries available, the Processor will send a callback to the Authorization contract with the execution_id of the batch and the result of the execution. All this information will be stored in the Authorization contract state so the history of all executions can be queried from it. This is what a `ProcessorCallbackInfo` looks like:
```rust
pub struct ProcessorCallbackInfo {
    // Execution ID that the callback was for
    pub execution_id: u64,
    // Timestamp of entry creation
    pub created_at: u64,
    // Timestamp of last update of this entry
    pub last_updated_at: u64,
    // Who started this operation, used for tokenfactory actions
    pub initiator: OperationInitiator,
    // Address that can send a bridge timeout or success for the message (if applied)
    pub bridge_callback_address: Option<Addr>,
    // Address that will send the callback for the processor
    pub processor_callback_address: Addr,
    // Domain that the callback came from
    pub domain: Domain,
    // Label of the authorization
    pub label: String,
    // Messages that were sent to the processor
    pub messages: Vec<ProcessorMessage>,
    // Optional ttl for re-sending in case of bridged timeouts
    pub ttl: Option<Expiration>,
    // Result of the execution
    pub execution_result: ExecutionResult,
}

#[cw_serde]
pub enum ExecutionResult {
    InProcess,
    // Everything executed successfully
    Success,
    // Execution was rejected, and the reason
    Rejected(String),
    // Partially executed, for non-atomic function batches
    // Indicates how many functions were executed and the reason the next function was not executed
    PartiallyExecuted(usize, String),
    // Removed by Owner - happens when, from the authorization contract, a remove item from queue is sent
    RemovedByOwner,
    // Timeout - happens when the bridged message times out
    // We'll use a flag to indicate if the timeout is retriable or not
    // true - retriable
    // false - not retriable
    Timeout(bool),
    // Expired - happens when the batch wasn't executed in time according to the subroutine configuration
    // Indicates how many functions were executed (non-atomic batches might have executed some functions before the expiration)
    Expired(usize),
    // Unexpected error that should never happen but we'll store it here if it ever does
    UnexpectedError(String),
}
```
The key information here is the `label`, to identify the authorization that was executed; the `messages`, to identify what the user sent; and the `execution_result`, to know if the execution was successful, partially successful or rejected.
Bridge Callbacks
When messages need to be sent through bridges because we are executing batches on external domains, we need to know if, for example, a timeout happened, and keep track of it. For this reason, we have callbacks for each bridge that we support, and specific logic that is executed when they are received. For `Polytone` timeouts, we check whether the `ttl` field has expired and allow permissionless retries if it is still valid. If the `ttl` has expired, we set the `ExecutionResult` to a non-retriable timeout and send the authorization token back to the user, if the user sent one to execute the authorization.
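A minimal sketch of that ttl decision, assuming the `ttl` field uses `cw_utils::Expiration` as in the `ProcessorCallbackInfo` struct above:

```rust
use cosmwasm_std::BlockInfo;
use cw_utils::Expiration;

// Returns true if the timed-out message may still be retried permissionlessly,
// false if it must be finalized as ExecutionResult::Timeout(false).
fn timeout_is_retriable(ttl: Option<Expiration>, block: &BlockInfo) -> bool {
    match ttl {
        Some(expiration) => !expiration.is_expired(block),
        // No ttl configured: nothing to retry against.
        None => false,
    }
}
```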
Connectors
Connectors are a way for the Authorization contract on the main domain to interact with external domains. When adding an `ExternalDomain` to the Authorization contract, we must specify the Connector information to be used, depending on the `ExecutionEnvironment`. These connectors are responsible for receiving the message batches from the Authorization contract and triggering the necessary actions for the relayers to pick them up and deliver them to the Processor contract on the `ExternalDomain`. The connector on the `ExternalDomain` will also receive callbacks with the `ExecutionResult` from the Processor contract and send them back to the Authorization contract.
We currently support the following connectors:
Polytone
To connect `ExternalDomains` that use `CosmWasm` as the `ExecutionEnvironment`, we use Polytone. Polytone is a set of smart contracts, instantiated on both domains, that implement logic to pass messages to each other using IBC. Polytone consists of the following contracts:
- Polytone Note: contract responsible for sending messages from the Authorization contract to the Processor contract on the external domain, and for receiving the callback from the Processor contract on the external domain and sending it back to the Authorization contract.
- Polytone Voice: contract that receives the message from Polytone Note and instantiates a Polytone Proxy for each sender that will redirect the message to the destination.
- Polytone Proxy: contract instantiated by Polytone Voice responsible for sending messages received from Polytone Note to the corresponding contract.
To connect the Authorization contract with an external domain that uses Polytone as a connector, we need to provide the Polytone Note address and the predicted Polytone Proxy addresses for both the Authorization contract (when adding the domain) and the Processor Contract (when instantiating the Processor). An IBC relayer must relay these two channels to enable communication.
This is the sequence of messages when using Polytone as a connector:
```mermaid
graph TD
    %% Execution Result Sequence
    subgraph Execution_Sequence [Execution Result Sequence]
        E2[Processor Contract]
        D2[Polytone Note on External Domain]
        C2[Polytone Voice on Main Domain]
        B2[Polytone Proxy on Main Domain]
        A2[Authorization Contract]
        E2 -->|Step 5: Execution Result| D2
        D2 -->|Step 6: Relayer| C2
        C2 -->|Step 7: Instantiate & Forward Result| B2
        B2 -->|Step 8: Execution Result| A2
    end
    %% Message Batch Sequence
    subgraph Batch_Sequence [Message Batch Sequence]
        A1[Authorization Contract]
        B1[Polytone Note on Main Domain]
        C1[Polytone Voice on External Domain]
        D1[Polytone Proxy on External Domain]
        E1[Processor Contract]
        A1 -->|Step 1: Message Batch| B1
        B1 -->|Step 2: Relayer| C1
        C1 -->|Step 3: Instantiate & Forward Batch| D1
        D1 -->|Step 4: Message Batch| E1
    end
```
Hyperlane
To connect `ExternalDomains` that use `EVM` as the `ExecutionEnvironment`, we use Hyperlane. Hyperlane is a set of smart contracts that are deployed on both domains and communicate with one another using the Hyperlane Relayer. The required Hyperlane contracts are the following:
- Mailbox: contract responsible for receiving messages destined for another domain and emitting an event with the message, to be picked up by the relayer. The Mailbox also receives messages to be executed on a domain from the relayers and routes them to the correct destination contract.
To connect the Authorization contract with an external domain that uses Hyperlane as a connector, we need to provide the Mailbox address for both the Authorization contract (when adding the domain) and the Processor contract (when instantiating the Processor). A Hyperlane Relayer must relay these two domains using the Mailbox addresses to make the communication possible.
NOTE: Other Hyperlane contracts need to be used to set up Hyperlane, but they are not used in the context of the Authorization contract or the Processor. For more information on how this works, check Hyperlane's documentation or see our Ethereum integration tests, where we set up all the required Hyperlane contracts and the relayer in advance before creating our EVM Program.
This is the sequence of messages when using Hyperlane as a connector:
```mermaid
graph TD
    %% Execution Result Sequence
    subgraph Execution_Sequence [Execution Result Sequence]
        E2[Processor Contract]
        D2[Mailbox on External Domain]
        C2[Mailbox on Main Domain]
        B2[Authorization Contract]
        E2 -->|Step 5: Execution Result| D2
        D2 -->|Step 6: Relayer| C2
        C2 -->|Step 7: Execution Result| B2
    end
    %% Message Batch Sequence
    subgraph Batch_Sequence [Message Batch Sequence]
        A1[Authorization Contract]
        B1[Mailbox on Main Domain]
        C1[Mailbox on External Domain]
        D1[Processor Contract]
        A1 -->|Step 1: Message Batch| B1
        B1 -->|Step 2: Relayer| C1
        C1 -->|Step 3: Message Batch| D1
    end
```
Encoding
When messages are passed between the Authorization contract and a Processor contract on a domain that does not use a CosmWasm `ExecutionEnvironment`, we need to encode the messages in a way that the Processor contract and the Libraries it calls can understand. To do this, two new contracts were created: the `Encoder Broker` and the `Encoder`.
Encoder Broker
The `Encoder Broker` is a very simple contract that routes messages to the correct `Encoder` contract. It maps an `Encoder Version` to an `Encoder Contract Address`. The `Encoder Broker` is instantiated once on the `Main Domain`, with an owner that can add/remove these mappings. An example of a mapping is `"evm_encoder_v1"` to `<encoder_contract_address_on_neutron>`. The `Encoder Broker` has two queries, `Encode` and `Decode`, which route the message to be encoded/decoded to the specified `Encoder Version`.
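A sketch of what that routing might look like on the broker side (the storage map and query shapes here are hypothetical, shown only to illustrate the version-to-address lookup):

```rust
use cosmwasm_schema::cw_serde;
use cosmwasm_std::{Addr, Binary, Deps, StdResult};
use cw_storage_plus::Map;

// Hypothetical mapping from Encoder Version to Encoder contract address.
const ENCODERS: Map<&str, Addr> = Map::new("encoders");

// Hypothetical query forwarded to the selected Encoder.
#[cw_serde]
enum EncoderQueryMsg {
    Encode { message: Binary },
}

// Resolve the Encoder registered under `encoder_version` and forward the
// Encode request, returning the encoded bytes unchanged.
fn route_encode(deps: Deps, encoder_version: &str, message: Binary) -> StdResult<Binary> {
    let encoder = ENCODERS.load(deps.storage, encoder_version)?;
    deps.querier
        .query_wasm_smart(encoder, &EncoderQueryMsg::Encode { message })
}
```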
Encoder
The `Encoder` is the contract that encodes/decodes the messages for a specific `ExecutionEnvironment`. It is instantiated on the `Main Domain` and added to the `Encoder Broker` with a version. `Encoders` are defined for a specific `ExecutionEnvironment` and have an `Encode` and a `Decode` query to which we provide the message to be encoded/decoded. Here is an example of how these queries are performed:
```rust
fn encode(message: ProcessorMessageToEncode) -> StdResult<Binary> {
    match message {
        ProcessorMessageToEncode::SendMsgs {
            execution_id,
            priority,
            subroutine,
            messages,
        } => send_msgs::encode(execution_id, priority, subroutine, messages),
        ProcessorMessageToEncode::InsertMsgs {
            execution_id,
            queue_position,
            priority,
            subroutine,
            messages,
        } => insert_msgs::encode(execution_id, queue_position, priority, subroutine, messages),
        ProcessorMessageToEncode::EvictMsgs {
            queue_position,
            priority,
        } => evict_msgs::encode(queue_position, priority),
        ProcessorMessageToEncode::Pause {} => Ok(pause::encode()),
        ProcessorMessageToEncode::Resume {} => Ok(resume::encode()),
    }
}

fn decode(message: ProcessorMessageToDecode) -> StdResult<Binary> {
    match message {
        ProcessorMessageToDecode::HyperlaneCallback { callback } => {
            Ok(hyperlane::callback::decode(&callback)?)
        }
    }
}
```
As we can see above, the `Encoder` has a match statement for each type of message that it can encode/decode. The `Encoder` can encode/decode messages for a specific `ExecutionEnvironment`. In the case of `ProcessorMessages` that include messages for a specific library, these messages will include the Library they are targeting. This allows the `Encoder` to apply the encoding/decoding logic for that specific library.
This `Encoder` will be called internally through the Authorization contract when the user sends a message to it. Here is an example of this execution flow:
- The owner adds an `ExternalDomain` with an `EVM ExecutionEnvironment` to the Authorization contract, specifying the `Encoder Broker` address and the `Encoder Version` to be used.
- The owner creates an authorization with a subroutine containing an `AtomicFunction` of the `EvmCall(EncoderInfo, LibraryName)` type.
- A user executes this authorization, passing the message. The Authorization contract routes the message to the `Encoder Broker` with the `Encoder Version` specified in `EncoderInfo`, passing the `LibraryName` to be used for the message.
- The `Encoder Broker` routes the message to the correct `Encoder` contract, which encodes the message for that particular library and returns the encoded bytes to the Authorization contract.
- The Authorization contract sends the encoded message to the Processor contract on the `ExternalDomain`, which will be able to decode and interpret the message.
We currently have an `Encoder` for `EVM` messages; more `Encoders` will be added as we support additional `ExecutionEnvironments`.
Libraries
This section contains a detailed description of the various libraries that can be used to rapidly build Valence cross-chain programs for each execution environment.
CosmWasm Libraries
This section contains a detailed description of all the libraries that can be used in CosmWasm execution environments.
Astroport LPer library
The Valence Astroport LPer library allows providing liquidity into an Astroport Liquidity Pool from an input account and depositing the LP tokens into an output account.
High-level flow
```mermaid
---
title: Astroport Liquidity Provider
---
graph LR
    IA((Input Account))
    OA((Output Account))
    P[Processor]
    S[Astroport Liquidity Provider]
    AP[Astroport Pool]
    P -- 1/Provide Liquidity --> S
    S -- 2/Query balances --> IA
    S -- 3/Compute amounts --> S
    S -- 4/Do Provide Liquidity --> IA
    IA -- 5/Provide Liquidity [Tokens] --> AP
    AP -- 5'/Transfer LP Tokens --> OA
```
Functions
Function | Parameters | Description |
---|---|---|
ProvideDoubleSidedLiquidity | expected_pool_ratio_range: Option<DecimalRange> | Provide double-sided liquidity to the pre-configured Astroport Pool from the input account, and deposit the LP tokens into the output account. Abort if the pool ratio is not within the expected_pool_ratio_range (if specified). |
ProvideSingleSidedLiquidity | asset: String limit: Option<Uint128> expected_pool_ratio_range: Option<DecimalRange> | Provide single-sided liquidity for the specified asset to the pre-configured Astroport Pool from the input account, and deposit the LP tokens into the output account. Abort if the pool ratio is not within the expected_pool_ratio_range (if specified). |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
pub struct LibraryConfig {
    // Account from which the funds are LPed
    pub input_addr: LibraryAccountType,
    // Account to which the LP tokens are forwarded
    pub output_addr: LibraryAccountType,
    // Pool address
    pub pool_addr: String,
    // LP configuration
    pub lp_config: LiquidityProviderConfig,
}

pub struct LiquidityProviderConfig {
    // Pool type: old Astroport pools use Cw20 LP tokens and new pools use
    // native tokens, so we specify here what kind of token we are going to get.
    // We also provide the PairType structure of the right Astroport version
    // that we are going to use for each scenario.
    pub pool_type: PoolType,
    // Denoms of both native assets we are going to provide liquidity for
    pub asset_data: AssetData,
    // Slippage tolerance
    pub slippage_tolerance: Option<Decimal>,
}

#[cw_serde]
pub enum PoolType {
    NativeLpToken(valence_astroport_utils::astroport_native_lp_token::PairType),
    Cw20LpToken(valence_astroport_utils::astroport_cw20_lp_token::PairType),
}

pub struct AssetData {
    pub asset1: String,
    pub asset2: String,
}
```
Astroport Withdrawer library
The Valence Astroport Withdrawer library allows withdrawing liquidity from an Astroport Liquidity Pool from an input account and depositing the withdrawn tokens into an output account.
High-level flow
```mermaid
---
title: Astroport Liquidity Withdrawal
---
graph LR
    IA((Input Account))
    OA((Output Account))
    P[Processor]
    S[Astroport Liquidity Withdrawal]
    AP[Astroport Pool]
    P -- 1/Withdraw Liquidity --> S
    S -- 2/Query balances --> IA
    S -- 3/Compute amounts --> S
    S -- 4/Do Withdraw Liquidity --> IA
    IA -- 5/Withdraw Liquidity [LP Tokens] --> AP
    AP -- 5'/Transfer assets --> OA
```
Functions
Function | Parameters | Description |
---|---|---|
WithdrawLiquidity | - | Withdraw liquidity from the configured Astroport Pool from the input account and deposit the withdrawn tokens into the configured output account |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
pub struct LibraryConfig {
    // Account holding the LP position
    pub input_addr: LibraryAccountType,
    // Account to which the withdrawn assets are forwarded
    pub output_addr: LibraryAccountType,
    // Pool address
    pub pool_addr: String,
    // Liquidity withdrawer configuration
    pub withdrawer_config: LiquidityWithdrawerConfig,
}

pub struct LiquidityWithdrawerConfig {
    // Pool type: old Astroport pools use Cw20 LP tokens and new pools use
    // native tokens, so we specify here what kind of token we will use.
    // We also provide the PairType structure of the right Astroport version
    // that we are going to use for each scenario.
    pub pool_type: PoolType,
}

pub enum PoolType {
    NativeLpToken,
    Cw20LpToken,
}
```
Valence Forwarder library
The Valence Forwarder library allows continuously forwarding funds from an input account to an output account, subject to time constraints. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Forwarder library.
High-level flow
```mermaid
---
title: Forwarder Library
---
graph LR
    IA((Input Account))
    OA((Output Account))
    P[Processor]
    S[Forwarder Library]
    P -- 1/Forward --> S
    S -- 2/Query balances --> IA
    S -- 3/Do Send funds --> IA
    IA -- 4/Send funds --> OA
```
Functions
Function | Parameters | Description |
---|---|---|
Forward | - | Forward funds from the configured input account to the output account, according to the forwarding configs & constraints. |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
pub struct LibraryConfig {
    // Account from which the funds are pulled
    pub input_addr: LibraryAccountType,
    // Account to which the funds are sent
    pub output_addr: LibraryAccountType,
    // Forwarding configuration per denom
    pub forwarding_configs: Vec<UncheckedForwardingConfig>,
    // Constraints on forwarding operations
    pub forwarding_constraints: ForwardingConstraints,
}

pub struct UncheckedForwardingConfig {
    // Denom to be forwarded (either native or CW20)
    pub denom: UncheckedDenom,
    // Max amount of tokens to be transferred per Forward operation
    pub max_amount: Uint128,
}

// Time constraints on forwarding operations
pub struct ForwardingConstraints {
    // Minimum interval between 2 successive forward operations,
    // specified either as a number of blocks, or as a time delta.
    min_interval: Option<Duration>,
}
```
Valence Generic IBC Transfer library
The Valence Generic IBC Transfer library allows transferring funds over IBC from an input account on a source chain to an output account on a destination chain. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the IBC Transfer library.
Note: this library should not be used on Neutron, which requires some fees to be paid to relayers for IBC transfers. For Neutron, prefer using the dedicated (and optimized) Neutron IBC Transfer library instead.
High-level flow
```mermaid
---
title: Generic IBC Transfer Library
---
graph LR
    IA((Input Account))
    OA((Output Account))
    P[Processor]
    S[Gen IBC Transfer Library]
    subgraph Chain 1
        P -- 1/IbcTransfer --> S
        S -- 2/Query balances --> IA
        S -- 3/Do Send funds --> IA
    end
    subgraph Chain 2
        IA -- 4/IBC Transfer --> OA
    end
```
Functions
Function | Parameters | Description |
---|---|---|
IbcTransfer | - | Transfer funds over IBC from an input account on a source chain to an output account on a destination chain. |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
struct LibraryConfig {
    // Account from which the funds are pulled (on the source chain)
    input_addr: LibraryAccountType,
    // Account to which the funds are sent (on the destination chain)
    output_addr: LibraryAccountType,
    // Denom of the token to transfer
    denom: UncheckedDenom,
    // Amount to be transferred, either a fixed amount or the whole available balance.
    amount: IbcTransferAmount,
    // Memo to be passed in the IBC transfer message.
    memo: String,
    // Information about the destination chain.
    remote_chain_info: RemoteChainInfo,
    // Denom map for the Packet-Forwarding Middleware, to perform a multi-hop transfer.
    denom_to_pfm_map: BTreeMap<String, PacketForwardMiddlewareConfig>,
}

// Defines the amount to be transferred, either a fixed amount or the whole available balance.
enum IbcTransferAmount {
    // Transfer the full available balance of the input account.
    FullAmount,
    // Transfer the specified amount of tokens.
    FixedAmount(Uint128),
}

pub struct RemoteChainInfo {
    // Channel of the IBC connection to be used.
    channel_id: String,
    // Port of the IBC connection to be used.
    port_id: Option<String>,
    // Timeout for the IBC transfer.
    ibc_transfer_timeout: Option<Uint64>,
}

// Configuration for a multi-hop transfer using the Packet Forwarding Middleware
struct PacketForwardMiddlewareConfig {
    // Channel ID from the source chain to the intermediate chain
    local_to_hop_chain_channel_id: String,
    // Channel ID from the intermediate to the destination chain
    hop_to_destination_chain_channel_id: String,
    // Temporary receiver address on the intermediate chain
    hop_chain_receiver_address: String,
}
```
Valence Neutron IBC Transfer library
The Valence Neutron IBC Transfer library allows transferring funds over IBC from an input account on Neutron to an output account on a destination chain. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the IBC Transfer library.
Note: this library should only be used on Neutron, which requires fees to be paid to relayers for IBC transfers. For other CosmWasm chains, prefer using the Generic IBC Transfer library instead.
High-level flow
```mermaid
---
title: Neutron IBC Transfer Library
---
graph LR
    IA((Input Account))
    OA((Output Account))
    P[Processor]
    S[Neutron IBC Transfer Library]
    subgraph Neutron
        P -- 1/IbcTransfer --> S
        S -- 2/Query balances --> IA
        S -- 3/Do Send funds --> IA
    end
    subgraph Chain 2
        IA -- 4/IBC Transfer --> OA
    end
```
Functions
Function | Parameters | Description |
---|---|---|
IbcTransfer | - | Transfer funds over IBC from an input account on Neutron to an output account on a destination chain. |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
struct LibraryConfig {
    // Account from which the funds are pulled (on the source chain)
    input_addr: LibraryAccountType,
    // Account to which the funds are sent (on the destination chain)
    output_addr: LibraryAccountType,
    // Denom of the token to transfer
    denom: UncheckedDenom,
    // Amount to be transferred, either a fixed amount or the whole available balance.
    amount: IbcTransferAmount,
    // Memo to be passed in the IBC transfer message.
    memo: String,
    // Information about the destination chain.
    remote_chain_info: RemoteChainInfo,
    // Denom map for the Packet-Forwarding Middleware, to perform a multi-hop transfer.
    denom_to_pfm_map: BTreeMap<String, PacketForwardMiddlewareConfig>,
}

// Defines the amount to be transferred, either a fixed amount or the whole available balance.
enum IbcTransferAmount {
    // Transfer the full available balance of the input account.
    FullAmount,
    // Transfer the specified amount of tokens.
    FixedAmount(Uint128),
}

pub struct RemoteChainInfo {
    // Channel of the IBC connection to be used.
    channel_id: String,
    // Port of the IBC connection to be used.
    port_id: Option<String>,
    // Timeout for the IBC transfer.
    ibc_transfer_timeout: Option<Uint64>,
}

// Configuration for a multi-hop transfer using the Packet Forwarding Middleware
struct PacketForwardMiddlewareConfig {
    // Channel ID from the source chain to the intermediate chain
    local_to_hop_chain_channel_id: String,
    // Channel ID from the intermediate to the destination chain
    hop_to_destination_chain_channel_id: String,
    // Temporary receiver address on the intermediate chain
    hop_chain_receiver_address: String,
}
```
Osmosis CL LPer library
The Valence Osmosis CL LPer library allows creating concentrated liquidity positions on Osmosis from an input account and depositing the LP tokens into an output account.
High-level flow
```mermaid
---
title: Osmosis CL Liquidity Provider
---
graph LR
    IA((Input Account))
    OA((Output Account))
    P[Processor]
    S[Osmosis CL Liquidity Provider]
    AP[Osmosis CL Pool]
    P -- 1/Provide Liquidity --> S
    S -- 2/Query balances --> IA
    S -- 3/Configure target range --> S
    S -- 4/Do Provide Liquidity --> IA
    IA -- 5/Provide Liquidity [Tokens] --> AP
    AP -- 5'/Transfer LP Tokens --> OA
```
Concentrated Liquidity Position creation
Because of the way CL positions are created, there are two ways to achieve it:
Default
Default position creation centers around the idea of creating a position with respect to the currently active tick of the pool.
This method expects a single parameter, `bucket_amount`, which describes how many buckets of the pool should be taken into account on both sides of the price curve.
Consider a situation where the current tick is 125, and the configured tick spacing is 10.
If this method is called with `bucket_amount` set to 5, the following logic will be performed:
- find the current bucket range, which is 120 to 130
- extend the current bucket range by 5 buckets on both sides, meaning that the range "to the left" will be extended by 5 * 10 = 50, and the range "to the right" will be extended by 5 * 10 = 50, resulting in a covered range from 120 - 50 = 70 to 130 + 50 = 180, giving the position tick range of (70, 180) (see the sketch after this list)
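The arithmetic from the example above can be sketched as follows (an illustration of the described derivation, not the library's actual implementation):

```rust
// Derive the position's tick range from the current tick, the pool's tick
// spacing, and the requested bucket_amount.
fn default_tick_range(current_tick: i64, tick_spacing: i64, bucket_amount: i64) -> (i64, i64) {
    // Active bucket: floor the current tick to a multiple of the tick spacing.
    let bucket_lower = current_tick.div_euclid(tick_spacing) * tick_spacing; // 120
    let bucket_upper = bucket_lower + tick_spacing; // 130
    // Extend by bucket_amount buckets on each side.
    let extension = bucket_amount * tick_spacing; // 5 * 10 = 50
    (bucket_lower - extension, bucket_upper + extension) // (70, 180)
}

fn main() {
    assert_eq!(default_tick_range(125, 10, 5), (70, 180));
}
```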
Custom
Custom position creation allows for more fine-grained control over the way the position is created.
This approach expects users to specify the following parameters:
- `tick_range`, which describes the price range to be covered
- `token_min_amount_0` and `token_min_amount_1`, optional parameters that describe the minimum amount of tokens that should be provided to the pool
With this flexibility a wide variety of positions can be created, such as those that are entirely single-sided.
Functions
Function | Parameters | Description |
---|---|---|
ProvideLiquidityDefault | bucket_amount: Uint64 | Create a position on the pre-configured Osmosis Pool from the input account, following the Default approach described above, and deposit the LP tokens into the output account. |
ProvideLiquidityCustom | tick_range: TickRange token_min_amount_0: Option<Uint128> token_min_amount_1: Option<Uint128> | Create a position on the pre-configured Osmosis Pool from the input account, following the Custom approach described above, and deposit the LP tokens into the output account. |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
pub struct LibraryConfig {
    // Account from which the funds are LPed
    pub input_addr: LibraryAccountType,
    // Account to which the LP position is forwarded
    pub output_addr: LibraryAccountType,
    // LP configuration
    pub lp_config: LiquidityProviderConfig,
}

pub struct LiquidityProviderConfig {
    // ID of the Osmosis CL pool
    pub pool_id: Uint64,
    // Pool asset 1
    pub pool_asset_1: String,
    // Pool asset 2
    pub pool_asset_2: String,
    // Pool global price range
    pub global_tick_range: TickRange,
}
```
Osmosis CL liquidity withdrawer library
The Valence Osmosis CL Withdrawer library allows withdrawing a concentrated liquidity position from an Osmosis pool via an input account and transferring the resulting tokens to an output account.
High-level flow
```mermaid
---
title: Osmosis CL Liquidity Withdrawal
---
graph LR
    IA((Input Account))
    OA((Output Account))
    P[Processor]
    S[Osmosis CL Liquidity Withdrawal]
    AP[Osmosis CL Pool]
    P -- 1/Withdraw Liquidity --> S
    S -- 2/Query balances --> IA
    S -- 3/Compute amounts --> S
    S -- 4/Do Withdraw Liquidity --> IA
    IA -- 5/Withdraw Liquidity [LP Position] --> AP
    AP -- 5'/Transfer assets --> OA
```
Functions
Function | Parameters | Description |
---|---|---|
WithdrawLiquidity | position_id: Uint64 liquidity_amount: String | Withdraw liquidity from the configured Osmosis Pool from the input account, according to the given parameters, and transfer the withdrawn tokens to the configured output account |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
pub struct LibraryConfig {
    // Account holding the LP position
    pub input_addr: LibraryAccountType,
    // Account to which the withdrawn assets are forwarded
    pub output_addr: LibraryAccountType,
    // ID of the pool
    pub pool_id: Uint64,
}
```
Osmosis GAMM LPer library
The Valence Osmosis GAMM LPer library allows joining a pool on Osmosis, using the GAMM module (Generalized Automated Market Maker), from an input account, and depositing the LP tokens into an output account.
High-level flow
```mermaid
---
title: Osmosis GAMM Liquidity Provider
---
graph LR
    IA((Input Account))
    OA((Output Account))
    P[Processor]
    S[Osmosis GAMM Liquidity Provider]
    AP[Osmosis Pool]
    P -- 1/Join Pool --> S
    S -- 2/Query balances --> IA
    S -- 3/Compute amounts --> S
    S -- 4/Do Join Pool --> IA
    IA -- 5/Join Pool [Tokens] --> AP
    AP -- 5'/Transfer LP tokens --> OA
```
Functions
Function | Parameters | Description |
---|---|---|
ProvideDoubleSidedLiquidity | expected_spot_price: Option<DecimalRange> | Provide double-sided liquidity to the pre-configured Osmosis Pool from the input account, and deposit the LP tokens into the output account. Abort if the spot price is not within the expected_spot_price range (if specified). |
ProvideSingleSidedLiquidity | asset: String limit: Option<Uint128> expected_spot_price: Option<DecimalRange> | Provide single-sided liquidity for the specified asset to the pre-configured Osmosis Pool from the input account, and deposit the LP tokens into the output account. Abort if the spot price is not within the expected_spot_price range (if specified). |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
pub struct LibraryConfig {
    // Account from which the funds are LPed
    pub input_addr: LibraryAccountType,
    // Account to which the LP position is forwarded
    pub output_addr: LibraryAccountType,
    // LP configuration
    pub lp_config: LiquidityProviderConfig,
}

pub struct LiquidityProviderConfig {
    // ID of the Osmosis pool
    pub pool_id: Uint64,
    // Pool asset 1
    pub pool_asset_1: String,
    // Pool asset 2
    pub pool_asset_2: String,
}
```
Osmosis GAMM liquidity withdrawer library
The Valence Osmosis GAMM Withdrawer library allows exiting a pool on Osmosis, using the GAMM module (Generalized Automated Market Maker), from an input account, and depositing the withdrawn tokens into an output account.
High-level flow
```mermaid
---
title: Osmosis GAMM Liquidity Withdrawal
---
graph LR
    IA((Input Account))
    OA((Output Account))
    P[Processor]
    S[Osmosis GAMM Liquidity Withdrawal]
    AP[Osmosis Pool]
    P -- 1/Withdraw Liquidity --> S
    S -- 2/Query balances --> IA
    S -- 3/Compute amounts --> S
    S -- 4/Do Withdraw Liquidity --> IA
    IA -- 5/Withdraw Liquidity [LP Tokens] --> AP
    AP -- 5'/Transfer assets --> OA
```
Functions
Function | Parameters | Description |
---|---|---|
WithdrawLiquidity | - | Withdraw liquidity from the configured Osmosis Pool from the input account and deposit the withdrawn tokens into the configured output account |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
pub struct LibraryConfig {
    // Account from which the funds are LPed
    pub input_addr: LibraryAccountType,
    // Account to which the LP tokens are forwarded
    pub output_addr: LibraryAccountType,
    // Liquidity withdrawer configuration
    pub withdrawer_config: LiquidityWithdrawerConfig,
}

pub struct LiquidityWithdrawerConfig {
    // ID of the pool
    pub pool_id: Uint64,
}
```
Valence Reverse Splitter library
The Valence Reverse Splitter library allows routing funds from one or more input account(s) to a single output account, for one or more token denom(s), according to the configured ratio(s). It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Reverse Splitter library.
High-level flow
```mermaid
---
title: Reverse Splitter Library
---
graph LR
    IA1((Input Account1))
    IA2((Input Account2))
    OA((Output Account))
    P[Processor]
    S[Reverse Splitter Library]
    C[Contract]
    P -- 1/Split --> S
    S -- 2/Query balances --> IA1
    S -- 2'/Query balances --> IA2
    S -. 3/Query split ratio .-> C
    S -- 4/Do Send funds --> IA1
    S -- 4'/Do Send funds --> IA2
    IA1 -- 5/Send funds --> OA
    IA2 -- 5'/Send funds --> OA
```
Functions
Function | Parameters | Description |
---|---|---|
Split | - | Split and route funds from the configured input account(s) to the output account, according to the configured token denom(s) and ratio(s). |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
struct LibraryConfig {
    // Account to which the funds are sent.
    output_addr: LibraryAccountType,
    // Split configuration per denom.
    splits: Vec<UncheckedSplitConfig>,
    // Base denom, used with ratios.
    base_denom: UncheckedDenom,
}

// Split config for specified account
struct UncheckedSplitConfig {
    // Denom for this split configuration (either native or CW20).
    denom: UncheckedDenom,
    // Address of the input account for this split config.
    account: LibraryAccountType,
    // Fixed amount of tokens or an amount defined based on a ratio.
    amount: UncheckedSplitAmount,
    // Multiplier relative to other denoms (only used if a ratio is specified).
    factor: Option<u64>,
}

// Split amount configuration, either a fixed amount of tokens or an amount defined based on a ratio
enum UncheckedSplitAmount {
    // Fixed amount of tokens
    FixedAmount(Uint128),
    // Fixed ratio e.g. 0.0262 for NTRN/STARS (or could be another arbitrary ratio)
    FixedRatio(Decimal),
    // Dynamic ratio calculation, delegated to an external contract
    // (e.g. a TWAP oracle wrapper); params is a base64-encoded arbitrary
    // payload sent in addition to the denoms.
    DynamicRatio {
        contract_addr: String,
        params: String,
    },
}

// Standard query & response for a contract computing a dynamic ratio
// for the Splitter & Reverse Splitter libraries.
#[cw_serde]
#[derive(QueryResponses)]
pub enum DynamicRatioQueryMsg {
    #[returns(DynamicRatioResponse)]
    DynamicRatio { denoms: Vec<String>, params: String },
}

// Response returned by the external contract for a dynamic ratio
#[cw_serde]
struct DynamicRatioResponse {
    pub denom_ratios: HashMap<String, Decimal>,
}
```
Valence Splitter library
The Valence Splitter library allows splitting funds from one input account to one or more output account(s), for one or more token denom(s), according to the configured ratio(s). It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Splitter library.
High-level flow
```mermaid
---
title: Splitter Library
---
graph LR
    IA((Input Account))
    OA1((Output Account 1))
    OA2((Output Account 2))
    P[Processor]
    S[Splitter Library]
    C[Contract]
    P -- 1/Split --> S
    S -- 2/Query balances --> IA
    S -. 3/Query split ratio .-> C
    S -- 4/Do Send funds --> IA
    IA -- 5/Send funds --> OA1
    IA -- 5'/Send funds --> OA2
```
Functions
Function | Parameters | Description |
---|---|---|
Split | - | Split funds from the configured input account to the output account(s), according to the configured token denom(s) and ratio(s). |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
struct LibraryConfig {
    // Address of the input account
    input_addr: LibraryAccountType,
    // Split configuration per denom
    splits: Vec<UncheckedSplitConfig>,
}

// Split config for specified account
struct UncheckedSplitConfig {
    // Denom for this split configuration (either native or CW20)
    denom: UncheckedDenom,
    // Address of the output account for this split config
    account: LibraryAccountType,
    // Fixed amount of tokens or an amount defined based on a ratio
    amount: UncheckedSplitAmount,
}

// Split amount configuration, either a fixed amount of tokens or an amount defined based on a ratio
enum UncheckedSplitAmount {
    // Fixed amount of tokens
    FixedAmount(Uint128),
    // Fixed ratio e.g. 0.0262 for NTRN/STARS (or could be another arbitrary ratio)
    FixedRatio(Decimal),
    // Dynamic ratio calculation, delegated to an external contract
    // (e.g. a TWAP oracle wrapper); params is a base64-encoded arbitrary
    // payload sent in addition to the denoms.
    DynamicRatio {
        contract_addr: String,
        params: String,
    },
}

// Standard query & response for a contract computing a dynamic ratio
// for the Splitter & Reverse Splitter libraries.
#[cw_serde]
#[derive(QueryResponses)]
pub enum DynamicRatioQueryMsg {
    #[returns(DynamicRatioResponse)]
    DynamicRatio { denoms: Vec<String>, params: String },
}

// Response returned by the external contract for a dynamic ratio
#[cw_serde]
struct DynamicRatioResponse {
    pub denom_ratios: HashMap<String, Decimal>,
}
```
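To illustrate the contract on the other side of the `DynamicRatio` query, here is a minimal sketch of a query handler, assuming the `DynamicRatioQueryMsg` and `DynamicRatioResponse` types defined above (a real implementation would consult e.g. a TWAP oracle instead of returning an even split):

```rust
use std::collections::HashMap;

use cosmwasm_std::{to_json_binary, Binary, Decimal, Deps, Env, StdResult};

// Toy handler: split evenly across all requested denoms, ignoring `params`.
// Assumes `denoms` is non-empty.
pub fn query(_deps: Deps, _env: Env, msg: DynamicRatioQueryMsg) -> StdResult<Binary> {
    match msg {
        DynamicRatioQueryMsg::DynamicRatio { denoms, params: _ } => {
            let ratio = Decimal::from_ratio(1u128, denoms.len() as u128);
            let denom_ratios: HashMap<String, Decimal> =
                denoms.into_iter().map(|denom| (denom, ratio)).collect();
            to_json_binary(&DynamicRatioResponse { denom_ratios })
        }
    }
}
```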
Neutron Interchain Querier
The Neutron Interchain Querier library enables Valence Programs to configure and carry out KV-based queries enabled by the `interchainqueries` module on Neutron.
Prerequisites
Active Neutron ICQ relayer
This library requires active Neutron ICQ Relayers operating on the specified target route.
Valence Middleware broker
Each KV-based query requires a correctly encoded key in order to be registered. This library obtains the query keys from Valence Middleware brokers, which expose particular type registries.

For a given KV-query to be performed, the underlying type registry must implement the `IcqIntegration` trait, which in turn enables the following functionality:
- `get_kv_key`, enabling the ability to get the correctly encoded `KVKey` for query registration
- `decode_and_reconstruct`, allowing the interchain query result to be reconstructed
Read more about the ICQ integration for a given type on the type registry documentation page.
Valence Storage account
Results received and meant for further processing by other libraries will be stored in Storage Accounts. Each instance of Neutron IC querier will be associated with its own storage account.
Query registration fee
The Neutron `interchainqueries` module is configured to escrow a fee in order to register a query. The fee parameter is dynamic and can be queried via the `interchainqueries` module. Currently the fee is set to `100000untrn`, but it may change in the future. Users must ensure that the fee is provided along with the query registration function call.
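A sketch of that assertion, assuming the current fee has just been queried from the `interchainqueries` module parameters (the helper name here is ours, not the module's API):

```rust
use cosmwasm_std::{ensure, Coin, MessageInfo, StdError, StdResult};

// Assert that the caller attached at least the currently required
// registration fee (e.g. 100000untrn at the time of writing).
fn assert_registration_fee(info: &MessageInfo, required: &Coin) -> StdResult<()> {
    let paid = info
        .funds
        .iter()
        .find(|coin| coin.denom == required.denom)
        .map(|coin| coin.amount)
        .unwrap_or_default();
    ensure!(
        paid >= required.amount,
        StdError::generic_err("insufficient interchain query registration fee")
    );
    Ok(())
}
```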
Query deregistration
Interchain Query escrow payments can be reclaimed by submitting the `RemoveInterchainQuery` message. Only the query owner (this contract) is able to submit this message. Interchain Queries should be removed once they are no longer needed; however, that moment may be different for each Valence Program depending on its configuration.
Background on the `interchainqueries` module
Query Registration Message types
Interchain queries can be registered and unregistered by submitting the following `neutron-sdk` messages:
```rust
pub enum NeutronMsg {
    // other variants
    RegisterInterchainQuery {
        /// **query_type** is a query type identifier ('tx' or 'kv' for now).
        query_type: String,
        /// **keys** is the KV-storage keys for which we want to get values from remote chain.
        keys: Vec<KVKey>,
        /// **transactions_filter** is the filter for transaction search ICQ.
        transactions_filter: String,
        /// **connection_id** is an IBC connection identifier between Neutron and remote chain.
        connection_id: String,
        /// **update_period** is used to say how often the query must be updated.
        update_period: u64,
    },
    RemoveInterchainQuery {
        query_id: u64,
    },
}
```
where the `KVKey` is defined as follows:
```rust
pub struct KVKey {
    /// **path** is a path to the storage (storage prefix) where you want to read value by key
    /// (usually name of cosmos-packages module: 'staking', 'bank', etc.)
    pub path: String,
    /// **key** is a key you want to read from the storage
    pub key: Binary,
}
```
The `RegisterInterchainQuery` variant can be used for both TX- and KV-based queries. Given that this library deals exclusively with KV-based queries, the `transactions_filter` field is irrelevant.
This library constructs the query registration message as follows:
```rust
// helper
let kv_registration_msg = NeutronMsg::register_interchain_query(
    QueryPayload::KV(vec![query_kv_key]),
    "connection-3".to_string(),
    5,
);

// which translates to:
let kv_registration_msg = NeutronMsg::RegisterInterchainQuery {
    query_type: QueryType::KV.into(),
    keys: vec![query_kv_key],
    transactions_filter: String::new(),
    connection_id: "connection-3".to_string(),
    update_period: 5,
};
```
`query_kv_key` here is obtained by querying the associated Middleware Broker for a given type and query parameters.
Query Result Message types
After a query is registered and fetched back to Neutron, its results can be queried with the following Neutron query:
```rust
pub enum NeutronQuery {
    /// Query a result of registered interchain query on remote chain
    InterchainQueryResult {
        /// **query_id** is an ID of the registered interchain query
        query_id: u64,
    },
    // other types
}
```
which will return the interchain query result:
```rust
pub struct InterchainQueryResult {
    /// **kv_results** is a raw key-value pairs of query result
    pub kv_results: Vec<StorageValue>,
    /// **height** is a height of remote chain
    pub height: u64,
    #[serde(default)]
    /// **revision** is a revision of remote chain
    pub revision: u64,
}
```
where `StorageValue` is defined as:
```rust
/// Describes value in the Cosmos-SDK KV-storage on remote chain
pub struct StorageValue {
    /// **storage_prefix** is a path to the storage (storage prefix) where you want to read
    /// value by key (usually name of cosmos-packages module: 'staking', 'bank', etc.)
    pub storage_prefix: String,
    /// **key** is a key under which the **value** is stored in the storage on remote chain
    pub key: Binary,
    /// **value** is a value which is stored under the **key** in the storage on remote chain
    pub value: Binary,
}
```
Interchain Query lifecycle
After the `RegisterInterchainQuery` message is submitted, the `interchainqueries` module will deduct the query registration fee from the caller. At that point the query is assigned its unique `query_id` identifier, which is not known in advance. This identifier is returned to the caller in the reply.
Once the query is registered, the interchain query relayers perform the following steps:

- fetch the specified value from the target domain
- post the query result to the `interchainqueries` module
- trigger the `SudoMsg::KVQueryResult` endpoint on the contract that registered the query
`SudoMsg::KVQueryResult` does not carry back the actual query result. Instead, it posts back the `query_id` of the query that was performed, announcing that its result is available. The obtained `query_id` can then be used to query the `interchainqueries` module for the raw interchain query result. One thing to note here is that these raw results are not meant to be (natively) interpreted by foreign VMs; instead, they adhere to the encoding schemes of the origin domain.
Library high-level flow
At its core, this library should enable three key functions:
- initiating the interchain queries
- receiving & postprocessing the query results
- reclaiming the escrowed fees by unregistering the queries
Considering that Valence Programs operate across different VMs and adhere to their rules, these functions can be divided into two categories:
- external operations (Valence <> host VM)
- internal operations (Valence <> Valence)
From this perspective, query initiation, receipt, and termination can be seen as external operations that adhere to the functionality provided by the `interchainqueries` module on Neutron.
On the other hand, query result postprocessing involves internal Valence Program operations. KV-query results fetched from remote domains are not readily useful within the Valence scope because of their encoding formats. Result postprocessing is therefore about adapting remote domain data types into canonical Valence Protocol data types that can be reasoned about.
For most Cosmos SDK based chains, KV-storage values are encoded in protobuf. Interpreting protobuf from within a CosmWasm context is not straightforward and requires explicit conversion steps. Other domains may store their state in other encoding formats. This library does not make any assumptions about the encoding schemes that remote domains may be subject to; instead, that responsibility is handed over to Valence Middleware.
The final step in result postprocessing is persisting the canonicalized query results. The resulting Valence Types are written into a Storage Account, making them available for further processing or interpretation.
Library Lifecycle
With the baseline functionality in mind, there are a few design decisions that shape the overall lifecycle of this library.
Instantiation flow
Neutron Interchain Querier is instantiated with the full configuration needed to initiate and process the queries that it will be capable of executing. After instantiation, the library has the full context needed to carry out its functions.
The library is configured with the following `LibraryConfig`. Further sections will focus on each of its fields.
```rust
pub struct LibraryConfig {
    pub storage_account: LibraryAccountType,
    pub querier_config: QuerierConfig,
    pub query_definitions: BTreeMap<String, QueryDefinition>,
}
```
Storage Account association
Like other libraries, Neutron IC querier has a notion of its associated account.
The associated Storage account authorizes libraries like Neutron IC Querier to persist canonical Valence types in its storage.
Unlike most other libraries, IC querier does not differentiate between input and output accounts. There is just an account, and it is the only account that this library will be authorized to post its results into.
Storage account association follows the same logic of approving/revoking libraries. Its configuration is done via `LibraryAccountType`, following the same account pattern as other libraries.
Global configurations that apply to all queries
While this library is capable of carrying out an arbitrary number of distinct interchain queries, their scope is bound by the `QuerierConfig`. The `QuerierConfig` describes ICQ parameters that apply to every query managed by this library. It can be seen as the library's global configuration, of which there are two parameters:
```rust
pub struct QuerierConfig {
    pub broker_addr: String,
    pub connection_id: String,
}
```
`connection_id` describes the IBC connection between Neutron and the target domain. This effectively limits each instance of Neutron IC Querier to querying one particular domain.

`broker_addr` describes the address of the associated middleware broker. Just as all queries are bound by a particular connection id, they are also postprocessed using a single broker instance.
Query configurations
Queries to be carried out by this library are configured with the following type:
```rust
pub struct QueryDefinition {
    pub registry_version: Option<String>,
    pub type_url: String,
    pub update_period: Uint64,
    pub params: BTreeMap<String, Binary>,
    pub query_id: Option<u64>,
}
```
- `registry_version: Option<String>` specifies which version of the type registry the middleware broker should use. When set to `None`, the broker uses its latest available type registry version. Set this field when a specific type registry version is needed instead of the latest one.
- `type_url: String` identifies the query type within the type registry (via the broker). An important thing to note here is that this url may differ from the one used to identify the target type on its origin domain. This decoupling is intentional, to allow flexible type mapping between domains when necessary.
- `update_period: Uint64` specifies how often the given query should be performed/updated.
- `params: BTreeMap<String, Binary>` provides the type registry with the base64-encoded query parameters that are going to be used for `KVKey` construction.
- `query_id: Option<u64>` is an internal parameter that gets modified during runtime. It must be set to `None` when configuring the library.
Every query definition must be associated with a unique string-based identifier (key). Query definitions are passed to the library config via a `BTreeMap<String, QueryDefinition>`, which ensures that there is only one `QueryDefinition` for every key. While these keys can be anything, they should clearly identify a particular query. Every function call exposed by this library expects these keys (and only these keys) as arguments. An example configuration is sketched below.
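For illustration, a hypothetical configuration with a single query keyed `"usdc_balance"` might look like this (the `type_url` and `params` values are made up for the example; real values depend on the broker's type registry):

```rust
use std::collections::BTreeMap;

use cosmwasm_std::{Binary, Uint64};

// Assumes QueryDefinition as defined above.
fn example_query_definitions() -> BTreeMap<String, QueryDefinition> {
    BTreeMap::from([(
        "usdc_balance".to_string(),
        QueryDefinition {
            // None: use the broker's latest type registry version
            registry_version: None,
            // illustrative type_url; must match an entry in the registry
            type_url: "/cosmos.bank.v1beta1.Balance".to_string(),
            update_period: Uint64::new(5),
            // base64-encoded query parameters (illustrative placeholder)
            params: BTreeMap::from([("addr".to_string(), Binary::default())]),
            // must be None at configuration time
            query_id: None,
        },
    )])
}
```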
Execution flow
With Neutron IC Querier instantiated, the library is ready to start carrying out the queries.
Query registration
Configured queries can be registered with the following function:
```rust
RegisterKvQuery { target_query: String }
```
Query registration flow consists of the following steps:
- querying the `interchainqueries` module for the currently set query registration fee and asserting that the function caller covered all expected fees
- querying the middleware broker to obtain the `KVKey` value to be used in the ICQ registration
- constructing and firing the ICQ registration message
Each configured query can be started with this function call.
Query result processing
Interchain Query results are delivered to the `interchainqueries` module in an asynchronous manner. To ensure that query results are available to Valence Programs as fresh as possible, this library leverages `sudo` callbacks that are triggered after ICQ relayers post back the results for a query registered by this library.
This entry point is configured as follows:
```rust
pub fn sudo(deps: ExecuteDeps, _env: Env, msg: SudoMsg) -> StdResult<Response<NeutronMsg>> {
    match msg {
        // this is triggered by the ICQ relayer delivering the query result
        SudoMsg::KVQueryResult { query_id } => handle_sudo_kv_query_result(deps, query_id),
        _ => Ok(Response::default()),
    }
}
```
This function call triggers a set of actions that process the raw query result into a canonical Valence Type before storing it in the associated Storage account:
- query the `interchainqueries` module to obtain the raw query result associated with the given `query_id`
- query the broker to deserialize the proto-encoded result into a native Rust type
- query the broker to canonicalize the native Rust type into a `ValenceType`
- post the resulting canonical type to the associated storage account
After these actions, the associated storage account holds the adapted query result in its storage, in the same block in which the result was brought into Neutron.
Query deregistration
Actively registered queries can be removed from the active query set with the following function:
```rust
DeregisterKvQuery { target_query: String }
```
This function performs two actions.
First, it queries the `interchainqueries` module on Neutron for the `target_query`. This is done in order to find the deposit fee that was escrowed upon query registration.
Next, the library submits the query removal request to the `interchainqueries` module. If this request is successful, the deposit fee tokens are transferred to the sender that initiated this function.
Library in Valence Programs
Neutron IC Querier does not behave like a standard library in that it does not result in any fungible outcome. Instead, it produces a data object in the form of a Valence Type.
While that result could be posted directly to the state of this library, it is instead posted to an associated output account meant for storing data. Just as some other libraries have a notion of input accounts that grant them permission to execute some logic, Neutron IC Querier has a notion of an associated account that grants the querier permission to write data into its storage slots.
For example, consider a situation where this library has queried the balance of some remote account, parsed the response into a Valence Balance type, and written the resulting object into its associated storage account. That same associated account may be the input account of some other library, which will attempt to perform its function based on the content written to its input account. This may involve something along the lines of: `if balance > 0, do x; otherwise, do y`.
With that, the IC Querier flow in a Valence Program may look like this:
```mermaid
---
title: Neutron IC Querier in Valence Programs
---
graph LR
    A[neutron IC querier] -->|post Valence type| B(storage account)
    C[other library] -->|interpret Valence type| B
```
Valence Middleware is being actively developed. More elaborate examples of this library will be added here in the future.
Valence Drop Liquid Staker library
The Valence Drop Liquid Staker library allows liquid staking an asset from an input account in the Drop protocol and depositing the liquid staking derivative into the output account. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Drop Liquid Staker library.
High-level flow
```mermaid
---
title: Drop Liquid Staker Library
---
graph LR
    IA((Input Account))
    CC((Drop Core Contract))
    OA((Output Account))
    P[Processor]
    S[Drop Liquid Staker Library]
    P -- 1/Liquid Stake --> S
    S -- 2/Query balance --> IA
    S -- 3/Do Liquid Stake funds --> IA
    IA -- 4/Liquid Stake funds --> CC
    CC -- 5/Send LS derivative --> OA
```
Functions
Function | Parameters | Description |
---|---|---|
LiquidStake | ref (Optional): referral address | Liquid stakes the balance of the input account into the drop core contract and deposits LS derivative into the output account. |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
pub struct LibraryConfig {
    pub input_addr: LibraryAccountType,
    pub output_addr: LibraryAccountType,
    // Address of the liquid staker contract (drop core contract)
    pub liquid_staker_addr: String,
    // Denom of the asset we are going to liquid stake
    pub denom: String,
}
```
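A sketch of a possible configuration, with placeholder addresses and denom (these values are illustrative, not a known deployment):

```rust
let config = LibraryConfig {
    input_addr: LibraryAccountType::Addr("neutron1input...".to_string()),
    output_addr: LibraryAccountType::Addr("neutron1output...".to_string()),
    // hypothetical Drop core contract address
    liquid_staker_addr: "neutron1dropcore...".to_string(),
    // denom of the asset to liquid stake
    denom: "ibc/...".to_string(),
};
```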
Valence Drop Liquid Unstaker library
The Valence Drop Liquid Unstaker library allows liquid staked tokens (e.g., dNTRN or dATOM) to be redeemed for their underlying assets (e.g., NTRN or ATOM) through the Drop protocol. The liquid staked asset must be available in the input account. When the library's function to redeem the staked assets (`Unstake`) is invoked, the library issues a withdraw request to the Drop protocol, generating a tokenized voucher (represented as an NFT) that is held by the input account. This voucher can later be used to claim the underlying assets. Note that the underlying assets are not withdrawn immediately, as the Drop protocol unstakes assets asynchronously. At a later time, when the underlying assets are available for withdrawal, the library's claim function can be invoked with the voucher as an argument. This function will withdraw the underlying assets and deposit them into the output account.
High-level flow
```mermaid
---
title: Drop Liquid Unstaker Library - Unstake Flow
---
graph LR
    IA((Input Account))
    CC((Drop Core Contract))
    P2[Processor]
    S2[Drop Liquid Unstaker Library]
    P2 -- "1/Unstake" --> S2
    S2 -- "2/Query balance" --> IA
    S2 -- "3/Do Unstake funds" --> IA
    IA -- "4/Unstake funds" --> CC
    CC -- "5/Send NFT voucher" --> IA
```
```mermaid
---
title: Drop Liquid Unstaker Library - Withdraw Flow
---
graph LR
    IA((Input Account))
    WW((Withdrawal Manager Contract))
    P1[Processor]
    S1[Drop Liquid Unstaker Library]
    OA((Output Account))
    P1 -- "1/Withdraw (token_id)" --> S1
    S1 -- "2/Check ownership" --> IA
    S1 -- "3/Do Withdraw" --> IA
    IA -- "4/Send NFT voucher with ReceiveMsg" --> WW
    WW -- "5/Send unstaked funds" --> OA
```
Functions
Function | Parameters | Description |
---|---|---|
Unstake | - | Unstakes the balance of the input account from the drop core contract and deposits the voucher into the input account. |
Withdraw | token_id | Withdraws the voucher with the `token_id` identifier from the input account and deposits the unstaked assets into the output account. |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
pub struct LibraryConfig {
    pub input_addr: LibraryAccountType,
    pub output_addr: LibraryAccountType,
    // Address of the liquid unstaker contract (drop core contract)
    pub liquid_unstaker_addr: String,
    // Address of the withdrawal_manager_addr (drop withdrawal manager)
    pub withdrawal_manager_addr: String,
    // Address of the voucher NFT contract that we get after unstaking and we use for the withdraw
    pub voucher_addr: String,
    // Denom of the asset we are going to unstake
    pub denom: String,
}
```
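To illustrate the two-step flow: the processor would first submit `Unstake`, and only later, once Drop has processed the unbonding batch, `Withdraw` with the voucher's id. The `FunctionMsgs` wrapper name and the `token_id` type shown below are assumptions:

```rust
// step 1: unstake the input account's balance; the NFT voucher lands in the input account
let unstake_msg = FunctionMsgs::Unstake {};

// step 2 (later): claim the underlying assets with the voucher's id
let withdraw_msg = FunctionMsgs::Withdraw {
    token_id: "42".to_string(), // id of the voucher NFT held by the input account
};
```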
Valence ICA CCTP Transfer Library
The Valence ICA CCTP Transfer library allows remotely executing a CCTP transfer using a Valence interchain account on the Noble chain. It does so by remotely sending a `MsgDepositForBurn` to the ICS-27 ICA created by the Valence interchain account on Noble. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Valence ICA CCTP Transfer library.
High-level flow
```mermaid
---
title: ICA CCTP Transfer Library
---
graph LR
    subgraph Neutron
        P[Processor]
        L[ICA CCTP Transfer Library]
        I[Input Account]
        P -- 1)Transfer --> L
        L -- 2)Query ICA address --> I
        L -- 3)Do ICA MsgDepositForBurn --> I
    end
    subgraph Noble
        ICA[Interchain Account]
        I -- 4)Execute MsgDepositForBurn --> ICA
    end
```
Functions
Function | Parameters | Description |
---|---|---|
Transfer | - | Transfer funds with CCTP on Noble from the ICA created by the `input_account` to a `mint_recipient` on a `destination_domain_id` |
Configuration
The library is configured on instantiation via the `LibraryConfig` type.
```rust
pub struct LibraryConfig {
    // Address of the input account (Valence interchain account)
    pub input_addr: LibraryAccountType,
    // Amount that is going to be transferred
    pub amount: Uint128,
    // Denom that is going to be transferred
    pub denom: String,
    // Destination domain id
    pub destination_domain_id: u32,
    // This address is the bytes representation of the address (with 32 length and padded zeroes)
    // For more information, check https://docs.noble.xyz/cctp/mint#example
    pub mint_recipient: Binary,
}
```
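Since `mint_recipient` must be the 32-byte representation of the destination address, a 20-byte EVM address has to be left-padded with zeroes. A minimal sketch (the address is a placeholder, and the `hex` crate is assumed as a dependency):

```rust
use cosmwasm_std::Binary;

// 20-byte EVM recipient, left-padded with 12 zero bytes to the 32-byte
// format expected by CCTP's MsgDepositForBurn
let evm_address = hex::decode("0102030405060708090a0b0c0d0e0f1011121314").unwrap();
let mut padded = vec![0u8; 32 - evm_address.len()];
padded.extend_from_slice(&evm_address);
let mint_recipient = Binary::new(padded);
assert_eq!(mint_recipient.as_slice().len(), 32);
```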
EVM Libraries
This section contains a detailed description of all the libraries that can be used in EVM Execution Environments.
Valence Forwarder Library
The Valence Forwarder library allows continuously forwarding funds from an input account to an output account, subject to some time constraints. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Forwarder library.
High-level flow
```mermaid
---
title: Forwarder Library
---
graph LR
    IA((Input Account))
    OA((Output Account))
    P[Processor]
    S[Forwarder Library]
    P -- 1/Forward --> S
    S -- 2/Query balances --> IA
    S -- 3/Do Send funds --> IA
    IA -- 4/Send funds --> OA
```
Functions
Function | Parameters | Description |
---|---|---|
Forward | - | Forward funds from the configured input account to the output account, according to the forwarding configs & min interval. |
Configuration
The library is configured on deployment using the `ForwarderConfig` type.
```solidity
/**
 * @dev Configuration for a single token forwarding rule
 * @param tokenAddress Address of token to forward (0x0 for native coin)
 * @param maxAmount Maximum amount to forward per execution
 */
struct ForwardingConfig {
    address tokenAddress;
    uint128 maxAmount;
}

/**
 * @dev Interval type for forwarding: time-based or block-based
 */
enum IntervalType {
    TIME,
    BLOCKS
}

/**
 * @dev Main configuration struct
 * @param inputAccount Source account
 * @param outputAccount Destination account
 * @param forwardingConfigs Array of token forwarding rules
 * @param intervalType Whether to use time or block intervals
 * @param minInterval Minimum interval between forwards
 */
struct ForwarderConfig {
    Account inputAccount;
    Account outputAccount;
    ForwardingConfig[] forwardingConfigs;
    IntervalType intervalType;
    uint64 minInterval;
}

/**
 * @dev Tracks last execution time/block
 */
struct LastExecution {
    uint64 blockHeight;
    uint64 timestamp;
}
```
Valence CCTP Transfer library
The Valence CCTP Transfer library allows transferring funds from an input account to a mint recipient using the Cross-Chain Transfer Protocol (CCTP) v1. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the CCTP Transfer library.
High-level flow
```mermaid
---
title: CCTP Transfer Library
---
graph LR
    IA((Input Account))
    CCTPR((CCTP Relayer))
    MR((Mint Recipient))
    TM((CCTP Token Messenger))
    P[Processor]
    S[CCTP Transfer Library]
    subgraph DEST[ Destination Domain ]
        CCTPR -- 7/Mint tokens --> MR
    end
    subgraph EVM[ EVM Domain ]
        P -- 1/Forward --> S
        S -- 2/Query balances --> IA
        S -- 3/Do approve and call depositForBurn --> IA
        IA -- 4/ERC-20 approve --> TM
        IA -- 5/Call depositForBurn --> TM
        TM -- 6/Burn tokens and emit event --> TM
    end
    EVM --- DEST
```
Functions
Function | Parameters | Description |
---|---|---|
Transfer | - | Transfer funds from the configured input account to the mint recipient on the destination domain. |
Configuration
The library is configured on deployment using the `CCTPTransferConfig` type. A list of the supported CCTP destination domains that can be used in the `destinationDomain` field can be found here.
```solidity
/**
 * @dev Configuration struct for token transfer parameters.
 * @param amount The number of tokens to transfer. If set to 0, the entire balance is transferred.
 * @param mintRecipient The recipient address (in bytes32 format) on the destination chain where tokens will be minted.
 * @param inputAccount The account from which tokens will be debited.
 * @param destinationDomain The domain identifier for the destination chain.
 * @param cctpTokenMessenger The CCTP Token Messenger contract.
 * @param transferToken The ERC20 token address that will be transferred.
 */
struct CCTPTransferConfig {
    uint256 amount; // If we want to transfer all tokens, we can set this to 0.
    bytes32 mintRecipient;
    Account inputAccount;
    uint32 destinationDomain;
    ITokenMessenger cctpTokenMessenger;
    address transferToken;
}
```
Middleware
This section contains a description of the Valence Protocol middleware design.
Valence Protocol Middleware components:
Middleware Broker
The Middleware broker acts as an app-level integration gateway in Valence Programs. "Integration" is used loosely here on purpose: brokers should remain agnostic to the primitives being integrated into the Valence Protocol. These primitives may include, but are not limited to:
- data types
- functions
- encoding schemes
- any other distributed system building blocks that may be implemented differently
Problem statement
Valence Programs can be configured to span over multiple domains and last for an indefinite duration of time.
Domains integrated into Valence Protocol are sovereign and evolve on their own.
Middleware brokers provide the means to live with these differences by enabling various primitive conversions to be as seamless as possible. Seamless here primarily refers to causing no downtime to bring a given primitive up-to-date, and making the process of doing so as easy as possible for the developers.
To visualize a rather complex instance of this problem, consider the following situation. A Valence Program is initialized to continuously query a particular type from a remote domain, modify some of its values, and send the altered object back to the remote domain for further actions. At some point during the runtime, the remote domain performs an upgrade that extends the given type with additional fields. The Valence Program is unaware of this upgrade and continues with its order of operations. However, from the perspective of the Valence Program, the type in question has drifted and is no longer representative of its origin domain.
Among other things, Middleware brokers should enable such programs to gracefully recover into a synchronized state that can continue operating in a correct manner.
Broker Lifecycle
Brokers are singleton components that are instantiated before the program start time.
Valence Programs refer to their brokers of choice by their respective addresses.
This means that the same broker instance for a particular domain could be used across many Valence Programs.
Brokers maintain their set of type registries and index them by semver. New type registries can be added to the broker during runtime. While programs have the freedom to select a particular version of a type registry for a given request, by default the most up-to-date type registry is used.
These two properties reduce the work needed to keep integrations up to date across active Valence Programs: updating one broker with the latest version of a given domain makes that version immediately available to all Valence Programs using that broker.
API
The broker interface is agnostic to the type registries it indexes. A single query is exposed:
```rust
pub struct QueryMsg {
    pub registry_version: Option<String>,
    pub query: RegistryQueryMsg,
}
```
This query message should only change in situations where it may become limiting. After receiving the query request, the broker relays the contained `RegistryQueryMsg` to the correct type registry and returns the result to the caller.
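For instance, a caller might ask the broker (using its latest registry version) for the `KVKey` of a given type; the `RegistryQueryMsg` variants are described in the next section, and the `type_id` and params below are illustrative:

```rust
use std::collections::BTreeMap;
use cosmwasm_std::{to_json_binary, Binary};

let mut params: BTreeMap<String, Binary> = BTreeMap::new();
params.insert("pool_id".to_string(), to_json_binary(&1u64)?);

// relayed by the broker to the matching type registry's KVKey handler
let kv_key: KVKey = deps.querier.query_wasm_smart(
    broker_addr,
    &QueryMsg {
        registry_version: None, // default to the broker's latest registry
        query: RegistryQueryMsg::KVKey {
            type_id: "gamm_pool".to_string(), // illustrative type id
            params,
        },
    },
)?;
```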
Middleware Type Registry
Middleware type registries are static components that define how primitives external to the Valence Protocol are adapted to be used within Valence programs.
While type registries can be used independently, they are typically meant to be registered into and used via brokers to ensure versioning is kept up to date.
Type Registry lifecycle
Type Registries are static contracts that define their primitives during compile time.
Once a registry is deployed, it is expected to remain unchanged. If a type change is needed, a new registry should be compiled, deployed, and registered into the broker to offer the missing or updated functionality.
API
All type registry instances must implement the same interface defined in middleware-utils.
Type registries function in a read-only manner: all of their functionality is exposed via the `RegistryQueryMsg`. Currently, the following primitive conversions are enabled:
```rust
pub enum RegistryQueryMsg {
    /// serialize a message to binary
    #[returns(NativeTypeWrapper)]
    FromCanonical { obj: ValenceType },
    /// deserialize a message from binary/bytes
    #[returns(Binary)]
    ToCanonical { type_url: String, binary: Binary },
    /// get the kvkey used for registering an interchain query
    #[returns(KVKey)]
    KVKey {
        type_id: String,
        params: BTreeMap<String, Binary>,
    },
    #[returns(NativeTypeWrapper)]
    ReconstructProto {
        type_id: String,
        icq_result: InterchainQueryResult,
    },
}
```
`RegistryQueryMsg` can be seen as the superset of all primitives that Valence Programs can expect. No particular type being integrated into the system is required to implement all available functionality, although that is possible.
To maintain a unified interface, all type registries have to adhere to the same API. This means that if a particular type enabled in a type registry only provides the means to perform native <-> canonical conversion, attempting to call `ReconstructProto` on that type will return an error stating that protobuf reconstruction is not enabled for this type.
Module organization
Primitives defined in type registries should be outlined in a domain-driven manner. Types, encodings, and any other functionality should be grouped by their domain and are expected to be self-contained, not leaking into other primitives.
For instance, an Osmosis type registry is expected to contain all registry instances related to the Osmosis domain. Different registry instances should be versioned by semver, following that of the external domain whose primitives are being integrated.
Enabled primitives
Currently, the following type registry primitives are enabled:
- Neutron Interchain Query types:
  - reconstructing native types from protobuf
  - obtaining the `KVKey` used to initiate the query for a given type
- Valence Canonical Types:
  - reconstructing native types from Valence Types
  - mapping native types into Valence Types
Example integration
As an example, consider the integration of the Osmosis GAMM pool.
Neutron Interchain Query integration
Neutron Interchain Query integration for a given type is achieved by implementing the `IcqIntegration` trait:
```rust
pub trait IcqIntegration {
    fn get_kv_key(params: BTreeMap<String, Binary>) -> Result<KVKey, MiddlewareError>;
    fn decode_and_reconstruct(
        query_id: String,
        icq_result: InterchainQueryResult,
    ) -> Result<Binary, MiddlewareError>;
}
```
get_kv_key
Implementing `get_kv_key` provides the means to obtain the `KVKey` needed to register the interchain query. For the Osmosis GAMM pool, the implementation may look like this:
```rust
impl IcqIntegration for OsmosisXykPool {
    fn get_kv_key(params: BTreeMap<String, Binary>) -> Result<KVKey, MiddlewareError> {
        let pool_prefix_key: u8 = 0x02;
        let id: u64 = try_unpack_domain_specific_value("pool_id", &params)?;

        let mut pool_access_key = vec![pool_prefix_key];
        pool_access_key.extend_from_slice(&id.to_be_bytes());

        Ok(KVKey {
            path: STORAGE_PREFIX.to_string(),
            key: Binary::new(pool_access_key),
        })
    }
}
```
decode_and_reconstruct
The other part of enabling interchain queries is the implementation of `decode_and_reconstruct`. This method is called when the ICQ relayer posts the query result back to the `interchainqueries` module on Neutron. For the Osmosis GAMM pool, the implementation may look like this:
```rust
impl IcqIntegration for OsmosisXykPool {
    fn decode_and_reconstruct(
        _query_id: String,
        icq_result: InterchainQueryResult,
    ) -> Result<Binary, MiddlewareError> {
        let any_msg: Any = Any::decode(icq_result.kv_results[0].value.as_slice())
            .map_err(|e| MiddlewareError::DecodeError(e.to_string()))?;

        let osmo_pool: Pool = any_msg
            .try_into()
            .map_err(|_| StdError::generic_err("failed to parse into pool"))?;

        to_json_binary(&osmo_pool)
            .map_err(StdError::from)
            .map_err(MiddlewareError::Std)
    }
}
```
Valence Type integration for a given type is achieved by implementing the `ValenceTypeAdapter` trait:
```rust
pub trait ValenceTypeAdapter {
    type External;

    fn try_to_canonical(&self) -> Result<ValenceType, MiddlewareError>;
    fn try_from_canonical(canonical: ValenceType) -> Result<Self::External, MiddlewareError>;
}
```
Ideally, Valence Types should represent the minimal amount of information needed and avoid any domain-specific logic or identifiers. In practice, this is a hard problem: native types that are mapped into Valence Types may need to be sent back to the remote domains. For that reason, we cannot afford to lose any domain-specific fields, so they are stored in the Valence Type itself for later reconstruction.
In the case of `ValenceXykPool`, this storage is kept in its `domain_specific_fields` field. Any fields that are logically common across all possible integrations into this type should be kept in their dedicated fields. In the case of constant product pools, such fields are the assets in the pool, and the shares issued that represent those assets:
```rust
#[cw_serde]
pub struct ValenceXykPool {
    /// assets in the pool
    pub assets: Vec<Coin>,
    /// total amount of shares issued
    pub total_shares: String,
    /// any other fields that are unique to the external pool type
    /// being represented by this struct
    pub domain_specific_fields: BTreeMap<String, Binary>,
}
```
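Reading a packed field back out goes through the canonical type itself. A small sketch, assuming `get_domain_specific_field` deserializes the stored `Binary` into the requested type (as it is used in the adapter implementations below) and that a `"pool_id"` key was packed:

```rust
use std::collections::BTreeMap;
use cosmwasm_std::to_json_binary;

let pool = ValenceXykPool {
    assets: vec![],
    total_shares: "0".to_string(),
    domain_specific_fields: BTreeMap::from([
        ("pool_id".to_string(), to_json_binary(&1u64)?), // illustrative key
    ]),
};
// typed read of the packed field
let id: u64 = pool.get_domain_specific_field("pool_id")?;
assert_eq!(id, 1);
```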
try_to_canonical
Implementing `try_to_canonical` provides the means of mapping a native remote type into the canonical Valence Type to be used in the Valence Protocol. For the Osmosis GAMM pool, the implementation may look like this:
```rust
impl ValenceTypeAdapter for OsmosisXykPool {
    type External = Pool;

    fn try_to_canonical(&self) -> Result<ValenceType, MiddlewareError> {
        // pack all the domain-specific fields
        let mut domain_specific_fields = BTreeMap::from([
            (ADDRESS_KEY.to_string(), to_json_binary(&self.0.address)?),
            (ID_KEY.to_string(), to_json_binary(&self.0.id)?),
            (
                FUTURE_POOL_GOVERNOR_KEY.to_string(),
                to_json_binary(&self.0.future_pool_governor)?,
            ),
            (
                TOTAL_WEIGHT_KEY.to_string(),
                to_json_binary(&self.0.total_weight)?,
            ),
            (
                POOL_PARAMS_KEY.to_string(),
                to_json_binary(&self.0.pool_params)?,
            ),
        ]);

        if let Some(shares) = &self.0.total_shares {
            domain_specific_fields
                .insert(SHARES_DENOM_KEY.to_string(), to_json_binary(&shares.denom)?);
        }

        for asset in &self.0.pool_assets {
            if let Some(token) = &asset.token {
                domain_specific_fields.insert(
                    format!("pool_asset_{}_weight", token.denom),
                    to_json_binary(&asset.weight)?,
                );
            }
        }

        let mut assets = vec![];
        for asset in &self.0.pool_assets {
            if let Some(t) = &asset.token {
                assets.push(coin(u128::from_str(&t.amount)?, t.denom.to_string()));
            }
        }

        let total_shares = self
            .0
            .total_shares
            .clone()
            .map(|shares| shares.amount)
            .unwrap_or_default();

        Ok(ValenceType::XykPool(ValenceXykPool {
            assets,
            total_shares,
            domain_specific_fields,
        }))
    }
}
```
try_from_canonical
The other part of enabling Valence Type integration is the implementation of `try_from_canonical`. This method is called when converting from the canonical type back to the native version. For the Osmosis GAMM pool, the implementation may look like this:
```rust
impl ValenceTypeAdapter for OsmosisXykPool {
    type External = Pool;

    fn try_from_canonical(canonical: ValenceType) -> Result<Self::External, MiddlewareError> {
        let inner = match canonical {
            ValenceType::XykPool(pool) => pool,
            _ => {
                return Err(MiddlewareError::CanonicalConversionError(
                    "canonical inner type mismatch".to_string(),
                ))
            }
        };

        // unpack domain specific fields from inner type
        let address: String = inner.get_domain_specific_field(ADDRESS_KEY)?;
        let id: u64 = inner.get_domain_specific_field(ID_KEY)?;
        let future_pool_governor: String =
            inner.get_domain_specific_field(FUTURE_POOL_GOVERNOR_KEY)?;
        let pool_params: Option<PoolParams> =
            inner.get_domain_specific_field(POOL_PARAMS_KEY)?;
        let shares_denom: String = inner.get_domain_specific_field(SHARES_DENOM_KEY)?;
        let total_weight: String = inner.get_domain_specific_field(TOTAL_WEIGHT_KEY)?;

        // unpack the pool assets
        let mut pool_assets = vec![];
        for asset in &inner.assets {
            let pool_asset = PoolAsset {
                token: Some(Coin {
                    denom: asset.denom.to_string(),
                    amount: asset.amount.into(),
                }),
                weight: inner
                    .get_domain_specific_field(&format!("pool_asset_{}_weight", asset.denom))?,
            };
            pool_assets.push(pool_asset);
        }

        Ok(Pool {
            address,
            id,
            pool_params,
            future_pool_governor,
            total_shares: Some(Coin {
                denom: shares_denom,
                amount: inner.total_shares,
            }),
            pool_assets,
            total_weight,
        })
    }
}
```
Valence Types
Valence Types are a set of canonical type wrappers to be used inside Valence Programs.
The primary operational domain of the Valence Protocol needs to consume, interpret, and otherwise manipulate data from external domains. For that reason, canonical representations of such types are defined in order to form an abstraction layer that all Valence Programs can reason about.
Canonical Type integrations
Canonical types to be used in Valence Programs are enabled by the Valence Protocol.
For instance, consider Astroport XYK and Osmosis GAMM pool types. These are two distinct data types that represent the same underlying concept - a constant product pool.
These types can be unified in the Valence Protocol context by being mapped to and from the following Valence Type definition:
```rust
pub struct ValenceXykPool {
    /// assets in the pool
    pub assets: Vec<Coin>,
    /// total amount of shares issued
    pub total_shares: String,
    /// any other fields that are unique to the external pool type
    /// being represented by this struct
    pub domain_specific_fields: BTreeMap<String, Binary>,
}
```
A remote type is considered integrated into the Valence Protocol when adapters are available that map between the canonical and original type definitions.
These adapters can be implemented by following the design outlined by type registries.
Active Valence Types
Active Valence types provide the interface for integrating remote domain representations of the same underlying concepts. Remote types can be integrated into Valence Protocol if and only if there is an enabled Valence Type representing the same underlying primitive.
TODO: start a dedicated section for each Valence Type
Currently enabled Valence types are:
- XYK pool
- Balance response
Valence Asserter
Valence Asserters provide the means to assert boolean conditions about Valence Types.
Each Valence Type variant may provide different assertion queries. To offer a unified API, Valence Asserter remains agnostic to the underlying type being queried and provides a common gateway to all available types.
Motivation
The primary use case for Valence Type assertions is to enable conditional execution of functions. A basic example of this may be expressed as "provide liquidity if and only if the pool price is greater than X".
While specific conditions like this could be internalized in each function that is to be executed, Valence Asserter aims to:
- enable such assertions to be performed prior to any library function (system level)
- not limit the assertions to a particular condition (generalize)
With these goals satisfied, arbitrary assertions can be performed at the processor level. Each function call that the configured program wishes to execute only if a certain condition is met can then be placed in a message batch and prepended with an assertion message. This way, when the message batch is processed, any assertion that does not evaluate to true (returns an `Err`) will prevent the later messages from executing. If the batch is atomic, the whole batch will abort. If the batch is non-atomic, the authorization configuration options will dictate the further behavior.
High-level flow
```mermaid
---
title: Valence Asserter
---
graph LR
    IA((Storage Account))
    P[Processor]
    S[Valence Asserter]
    P -- 1/Assert --> S
    S -- 2/Query storage slot(s) --> IA
    S -- 3/Evaluate the predicate --> S
    S -- 4/Return OK/ERR --> P
```
API
Function | Parameters | Description | Return Value |
---|---|---|---|
Assert | a: AssertionValue predicate: Predicate b: AssertionValue | Evaluates the given predicate R(a, b). If a or b are variables, they are fetched using the configuration specified in the respective `QueryInfo`. Both a and b must be deserializable into the same type. | - predicate evaluates to true: Ok() - predicate evaluates to false: Err |
Design
Assertions to be performed are expressed as R(a, b), where:
- a and b are values of the same type
- R is the predicate that applies to a and b
Valence Asserter design should enable such predicate evaluations to be performed in a generic manner within Valence Programs.
Assertion values
Assertion values are defined as follows:
```rust
pub enum AssertionValue {
    // storage account slot query
    Variable(QueryInfo),
    // constant valence primitive value
    Constant(ValencePrimitive),
}
```
Two values are required for any comparison. Both a and b can be configured to be obtained in one of two ways:
- Constant value (known before program instantiation)
- Variable value (known during program runtime)
Any combination of these values can be used for a given assertion:
- constant-constant (unlikely)
- constant-variable
- variable-variable
Variable assertion values
Variable assertion values are meant to be used for information that can only become known during runtime.
Such values will be obtained from Valence Types, which expose their own set of queries.
Valence Types reside in their dedicated storage slots in Storage Accounts.
Valence Asserter uses the following type in order to obtain the Valence Type and query its state:
```rust
pub struct QueryInfo {
    // addr of the storage account
    pub storage_account: String,
    // key to access the value in the storage account
    pub storage_slot_key: String,
    // b64 encoded query
    pub query: Binary,
}
```
Constant assertion values
Constant assertion values are meant to be used for assertions where one of the operands is known before runtime.
The Valence Asserter expects constant values to be passed using the `ValencePrimitive` enum, which wraps the standard `cosmwasm_std` types:
```rust
pub enum ValencePrimitive {
    Decimal(cosmwasm_std::Decimal),
    Uint64(cosmwasm_std::Uint64),
    Uint128(cosmwasm_std::Uint128),
    Uint256(cosmwasm_std::Uint256),
    String(String),
}
```
Predicates
Predicates R are specified with the following type:
```rust
pub enum Predicate {
    LT,  // <
    LTE, // <=
    EQ,  // ==
    GT,  // >
    GTE, // >=
}
```
In the context of the Valence Asserter, the predicate treats `a` as the left-hand side and `b` as the right-hand side (`a < b`).
While comparison of numeric types is straightforward, it is important to note that string predicates are evaluated in lexicographical order and are case-sensitive:
- "Z" < "a"
- "Assertion" != "assertion"
Example
Consider a Valence Program that wants to provide liquidity to a liquidity pool if and only if the pool price is above `10.0`.
The pool price can be obtained by querying a `ValenceXykPool` variant, which exposes the following query:
```rust
ValenceXykQuery::GetPrice {} -> Decimal
```
The program is configured to store the respective `ValenceXykPool` in a Storage Account with address `neutron123...`, under the storage slot `pool`.
Filling in the blanks of R(a, b), we have:
- variable `a` is obtained with the `GetPrice {}` query of the `neutron123...` storage slot `pool`
- predicate `R` is known in advance: `>`
- constant `b` is known in advance: `10.0`
Therefore, the assertion message may look as follows:
"assert": {
"a": {
"variable": {
"storage_account": "neutron123...",
"storage_slot": "pool",
"query": b64("GetPrice {}"),
}
},
"predicate": Predicate::GT,
"b": {
"constant": "10.0",
},
}
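The same assertion can also be built with the Rust types from this section. Only the operand construction is shown here, since the top-level execute wrapper is not spelled out in this document:

```rust
use cosmwasm_std::{to_json_binary, Decimal};

// variable operand: fetch the pool price from the storage account at runtime
let a = AssertionValue::Variable(QueryInfo {
    storage_account: "neutron123...".to_string(),
    storage_slot_key: "pool".to_string(),
    query: to_json_binary(&ValenceXykQuery::GetPrice {})?,
});
// constant operand: the 10.0 threshold, known before instantiation
let b = AssertionValue::Constant(ValencePrimitive::Decimal(Decimal::percent(1000)));
// evaluates a > b, i.e. "pool price is above 10.0"
let predicate = Predicate::GT;
```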
Examples
Here are some examples of Valence Programs that you can use to get started.
Token Swap Program
This example demonstrates a simple token swap program whereby two parties wish to exchange specific amounts of (different) tokens they each hold, at a rate they have previously agreed on. The program ensures the swap happens atomically, so neither party can withdraw without completing the trade.
```mermaid
---
title: Valence token swap program
---
graph LR
    InA((Party A Deposit))
    InB((Party B Deposit))
    OutA((Party A Withdraw))
    OutB((Party B Withdraw))
    SSA[Splitter A]
    SSB[Splitter B]
    subgraph Neutron
        InA --> SSA --> OutB
        InB --> SSB --> OutA
    end
```
The program is composed of the following components:
- Party A Deposit account: a Valence Base account into which Party A will deposit their tokens, to be exchanged with Party B's tokens.
- Splitter A: an instance of the Splitter library that will transfer Party A's tokens from its input account (i.e. the Party A Deposit account) to its output account (i.e. the Party B Withdraw account) upon execution of its `split` function.
- Party B Withdraw account: the account from which Party B can withdraw Party A's tokens after the swap has successfully completed. Note: this can be a Valence Base account, but it could also be a regular chain account, or a smart contract.
- Party B Deposit account: a Valence Base account into which Party B will deposit their funds, to be exchanged with Party A's funds.
- Splitter B: an instance of the Splitter library that will transfer Party B's tokens from its input account (i.e. the Party B Deposit account) to its output account (i.e. the Party A Withdraw account) upon execution of its `split` function.
- Party A Withdraw account: the account from which Party A can withdraw Party B's tokens after the swap has successfully completed. Note: this can be a Valence Base account, but it could also be a regular chain account, or a smart contract.
The program fulfils the requirement for an atomic exchange of tokens between the two parties by implementing an atomic subroutine composed of two function calls:
- Splitter A's `split` function
- Splitter B's `split` function
The Authorizations component will ensure that either both succeed or neither is executed, thereby ensuring that funds remain safe at all times (either remaining in the respective deposit accounts, or transferred to the respective withdraw accounts).
Crosschain Vaults
Note: This example is still in the design phase and includes new or experimental features of Valence Programs that may not be supported in the current production release.
Overview
You can use Valence Programs to create crosschain vaults. Users interact with a vault on one chain while the tokens are held on another chain where yield is generated.
Note: In our initial implementation we use Neutron for co-processing and Hyperlane for general message passing between the co-processor and the target domain. Deployment of Valence programs as zk RISC-V co-processors with permissionless message passing will be available in the coming months.
In this example, we have made the following assumptions:
- Users can deposit tokens into a standard ERC-4626 vault on Ethereum.
- ERC-20 shares are issued to users on Ethereum.
- If a user wishes to redeem their tokens, they can issue a withdrawal request which will burn the user's shares when tokens are redeemed.
- The redemption rate that tells us how many tokens can be redeemed per share is given by \( R = \frac{TotalAssets}{TotalIssuedShares} = \frac{TotalInVault + TotalInTransit + TotalInPosition}{TotalIssuedShares} \) (a sketch of this computation follows this list).
- A permissioned actor called the "Strategist" is authorized to transport funds from Ethereum to Neutron where they are locked in some DeFi protocol. And vice-versa, the Strategist can withdraw from the position so the funds are redeemable on Ethereum. The redemption rate must be adjusted by the Strategist accordingly.
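As referenced in the redemption-rate bullet above, here is a small sketch of that bookkeeping. All names here are hypothetical, not part of the vault's actual interface:

```rust
use cosmwasm_std::{Decimal, Uint128};

/// R = (TotalInVault + TotalInTransit + TotalInPosition) / TotalIssuedShares.
/// Returns None when no shares have been issued yet (avoids division by zero).
fn redemption_rate(
    total_in_vault: Uint128,
    total_in_transit: Uint128,
    total_in_position: Uint128,
    total_issued_shares: Uint128,
) -> Option<Decimal> {
    if total_issued_shares.is_zero() {
        return None;
    }
    let total_assets = total_in_vault + total_in_transit + total_in_position;
    Some(Decimal::from_ratio(total_assets, total_issued_shares))
}
```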
```mermaid
---
title: Crosschain Vaults Overview
---
graph LR
    User
    EV(Ethereum Vault)
    NP(Neutron Position)
    User -- Tokens --> EV
    EV -- Shares --> User
    EV -- Strategist Transport --> NP
    NP -- Strategist Transport --> EV
```
While we have chosen Ethereum and Neutron as examples here, one could similarly construct such vaults between any two chains as long as they are supported by Valence Programs.
Implementing Crosschain Vaults as a Valence Program
Recall that Valence Programs are comprised of Libraries and Accounts. Libraries are a collection of Functions that perform token operations on the Accounts. Since there are two chains here, Libraries and Accounts will exist on both chains.
Since gas is cheaper on Neutron than on Ethereum, computationally expensive operations, such as constraining the Strategist's actions, are performed on Neutron. Authorized messages are then executed by each chain's Processor. Hyperlane is used to pass messages from the Authorization contract on Neutron to the Processor on Ethereum.
```mermaid
---
title: Program Control
---
graph TD
    Strategist
    subgraph Ethereum
        EP(Processor)
        EHM(Hyperlane Mailbox)
        EL(Ethereum Valence Libraries)
        EVA(Valence Accounts)
    end
    subgraph Neutron
        A(Authorizations)
        NP(Processor)
        EE(EVM Encoder)
        NHM(Hyperlane Mailbox)
        NL(Neutron Valence Libraries)
        NVA(Valence Accounts)
    end
    Strategist --> A
    A --> EE --> NHM --> Relayer --> EHM --> EP --> EL --> EVA
    A --> NP --> NL --> NVA
```
Libraries and Accounts needed
On Ethereum, we'll need Accounts for:
- Deposit: To hold user-deposited tokens. Tokens from this pool can then be transported to Neutron.
- Withdraw: To hold tokens received from Neutron. Tokens from this pool can then be redeemed for shares.
On Neutron, we'll need Accounts for:
- Deposit: To hold tokens bridged from Ethereum. Tokens from this pool can be used to enter into the position on Neutron.
- Position: Will hold the vouchers or shares associated with the position on Neutron.
- Withdraw: To hold the tokens that are withdrawn from the position. Tokens from this pool can be bridged back to Ethereum.
We'll need the following Libraries on Ethereum:
- Bridge Transfer: To transfer funds from the Ethereum Deposit Account to the Neutron Deposit Account.
- Forwarder: To transfer funds between the Deposit and Withdraw Accounts on Ethereum. Two instances of the Library will be required.
We'll need the following Libraries on Neutron:
- Position Depositor: To take funds in the Deposit and create a position with them. The position is held by the Position account.
- Position Withdrawer: To redeem a position for underlying funds that are then transferred to the Withdraw Account on Neutron.
- Bridge Transfer: To transfer funds from the Neutron Withdraw Account to the Ethereum Withdraw Account.
Note that the Accounts mentioned here are the standard Valence Base Accounts. The Bridge Transfer library will depend on the token being transferred, but will offer similar functionality to the IBC Transfer library. The Position Depositor and Withdrawer will depend on the type of position, but can be similar to the Liquidity Provider and Liquidity Withdrawer.
Vault Contract
The Vault contract is a special contract on Ethereum that has an ERC-4626 interface.
User methods to deposit funds
- Deposit: Deposit funds into the registered Deposit Account. Receive shares back based on the redemption rate. `Deposit { amount: Uint256, receiver: String }`
- Mint: Mint shares from the vault. Expects the user to provide sufficient tokens to cover the cost of the shares based on the current redemption rate. `Mint { shares: Uint256, receiver: String }`
```mermaid
---
title: User Deposit and Share Mint Flow
---
graph LR
    User
    subgraph Ethereum
        direction LR
        EV(Vault)
        ED((Deposit))
    end
    User -- 1/ Deposit Tokens --> EV
    EV -- 2/ Send Shares --> User
    EV -- 3/ Send Tokens --> ED
```
User methods to withdraw funds
- Redeem: Send shares to redeem assets. This creates a `WithdrawRecord` in a queue. This record is processed at the next `Epoch`. `Redeem { shares: Uint256, receiver: String, max_loss_bps: u64 }`
- Withdraw: Withdraw an amount of assets. It expects the user to have sufficient shares. This creates a `WithdrawRecord` in a queue. This record is processed at the next `Epoch`. `Withdraw { amount: Uint256, receiver: String, max_loss_bps: u64 }`
Withdrawals are subject to a lockup period after the user has initiated a redemption. During this time the redemption rate may change. Users can specify an acceptable loss in case the redemption rate decreases using the `max_loss_bps` parameter.
After the `Epoch` has completed, a user may complete the withdrawal by executing the following message:
- CompleteWithdraw: Pop the `WithdrawRecord`. Pull funds from the Withdraw Account and send them to the user. Burn the user's deposited shares.
```mermaid
---
title: User Withdraw Flow
---
graph RL
    subgraph Ethereum
        direction RL
        EV(Vault)
        EW((Withdraw))
    end
    EW -- 2/ Send Tokens --> EV -- 3/ Send Tokens --> User
    User -- 1/ Deposit Shares --> EV
```
Strategist methods to manage the vault
The vault validates that the Processor is making calls to it. On Neutron, the Authorization contract limits the calls to be made only by a trusted Strategist. The Authorization contract can further constrain when or how Strategist actions can be taken.
- Update: The strategist can update the current redemption rate. `Update { rate: Uint256 }`
- Pause and Unpause: The strategist can pause and unpause vault operations. `Pause {}`
Program subroutines
The program authorizes the Strategist to update the redemption rate and transport funds between various Accounts.
Allowing the Strategist to transport funds
```mermaid
---
title: From Ethereum Deposit Account to Neutron Position Account
---
graph LR
    subgraph Ethereum
        ED((Deposit))
        ET(Bridge Transfer)
    end
    subgraph Neutron
        NPH((Position Holder))
        NPD(Position Depositor)
        ND((Deposit))
    end
    ED --> ET --> ND --> NPD --> NPH
```
```mermaid
---
title: From Neutron Position Account to Ethereum Withdraw Account
---
graph RL
    subgraph Ethereum
        EW((Withdraw))
    end
    subgraph Neutron
        NPH((Position Holder))
        NW((Withdraw))
        NT(Bridge Transfer)
        NPW(Position Withdrawer)
    end
    NPH --> NPW --> NW --> NT --> EW
```
```mermaid
---
title: Between Ethereum Deposit and Ethereum Withdraw Accounts
---
graph
    subgraph Ethereum
        ED((Deposit))
        EW((Withdraw))
        FDW(Forwarder)
    end
    ED --> FDW --> EW
```
Design notes
This is a simplified design to demonstrate how a crosschain vault can be implemented with Valence Programs. Production deployments will need to consider additional factors not covered here including:
- Fees for gas, bridging, and entering/exiting the position on Neutron. It is recommended that the vault impose withdrawal and platform fees on users.
- How to constrain Strategist behavior to ensure they set redemption rates correctly.
Testing your programs
Our testing infrastructure is built on several tools that work together to provide a comprehensive local testing environment:
Core Testing Framework
We use local-interchain, a component of the interchaintest developer toolkit. This allows you to deploy and run chains in a local environment, providing a controlled testing space for your blockchain applications.
Localic Utils
To make these tools more accessible in Rust, we've developed localic-utils. This Rust library provides convenient interfaces to interact with the local-interchain testing framework.
Program Manager
We provide a tool called the Program Manager that helps you manage your programs. We've created all the abstractions and helper functions to create your programs more efficiently together with local-interchain.
Use of the Program Manager is optional: it abstracts a lot of functionality and allows creating programs with much less code. If you want more fine-grained control over your programs, we also provide helper functions to create and interact with your programs directly, without it. In this section, we'll show you two different examples of how to test your programs, one using the Program Manager and one without it. There are also many more examples, each covering a different use case, in the `examples` folder of our e2e folder.
Initial Testing Set Up
For testing your programs, whether or not you use the manager, there is a common setup that needs to be done. This setup initializes the testing context with all the required information about the local-interchain environment.
1. Setting the TestContext using the TestContextBuilder
The `TestContext` is the interchain environment in which your program will run. Say you want to configure the Neutron and Osmosis chains; you may set it up as follows:
```rust
let mut test_ctx = TestContextBuilder::default()
    .with_unwrap_raw_logs(true)
    .with_api_url(LOCAL_IC_API_URL)
    .with_artifacts_dir(VALENCE_ARTIFACTS_PATH)
    .with_chain(ConfigChainBuilder::default_neutron().build()?)
    .with_chain(ConfigChainBuilder::default_osmosis().build()?)
    .with_log_file_path(LOGS_FILE_PATH)
    .with_transfer_channels(NEUTRON_CHAIN_NAME, OSMOSIS_CHAIN_NAME)
    .build()?;
```
This instantiates a `TestContext` with two chains, Neutron and Osmosis, that are connected via IBC by providing the `transfer_channels` parameter. The `api_url` is the URL of the local-interchain API, and the `artifacts_dir` is the path where the compiled programs are stored. The `log_file_path` is the path where the logs will be stored. The most important part here are the chains, which are created using `ConfigChainBuilder` with the default configurations for Neutron and Osmosis, together with the transfer channels between them. We provide builders for most chains, but you can also create your own configurations.
2. Custom chain-specific setup
Some chains require additional setup to interact with others. For example, if you are going to use a liquid staking chain like Persistence, you need to register and activate the host zone to allow liquid staking of its native token. We provide helper functions that do this for you, here's an example:
```rust
info!("Registering host zone...");
register_host_zone(
    test_ctx
        .get_request_builder()
        .get_request_builder(PERSISTENCE_CHAIN_NAME),
    NEUTRON_CHAIN_ID,
    &connection_id,
    &channel_id,
    &native_denom,
    DEFAULT_KEY,
)?;

info!("Activating host zone...");
activate_host_zone(NEUTRON_CHAIN_ID)?;
```
Other examples of this include deploying Astroport contracts or creating Osmosis pools. We provide helper functions for pretty much all of these, with examples for each in the `examples` folder.
Example without Program Manager
This example demonstrates how to test your program without the Program Manager after your initial testing set up has been completed as described in the Initial Testing Set Up section.
Use-case: In this particular example, we will show you how to create a program that liquid stakes NTRN tokens on the Persistence chain directly from a base account, without using libraries. Note that this example is for demonstration purposes only: in a real-world scenario you would not liquid stake NTRN, as it is not a staking token. We are also not using a liquid staking library for this example, although one could be created for this purpose.
The full code for this example can be found in the Persistence Liquid Staking example.
- Set up the Authorization contract and processor on the `Main Domain` (Neutron).
```rust
let now = SystemTime::now();
let salt = hex::encode(
    now.duration_since(SystemTime::UNIX_EPOCH)?
        .as_secs()
        .to_string(),
);

let (authorization_contract_address, _) =
    set_up_authorization_and_processor(&mut test_ctx, salt.clone())?;
```
This code sets up the Authorization contract and processor on Neutron. We use a time-based salt to ensure that the generated contract addresses are different on each test run. The `set_up_authorization_and_processor` helper function instantiates both the Processor and Authorization contracts on Neutron and returns the contract addresses to interact with both. As you can see, we are not using the Processor on Neutron here, but we are still setting it up.
- Set up an external domain and create a channel to start relaying messages.
```rust
let processor_on_persistence = set_up_external_domain_with_polytone(
    &mut test_ctx,
    PERSISTENCE_CHAIN_NAME,
    PERSISTENCE_CHAIN_ID,
    PERSISTENCE_CHAIN_ADMIN_ADDR,
    LOCAL_CODE_ID_CACHE_PATH_PERSISTENCE,
    "neutron-persistence",
    salt,
    &authorization_contract_address,
)?;
```
This function does the following:
- Instantiates all the Polytone contracts on both the main domain and the new external domain. The information of the external domain is provided in the function arguments.
- Creates a channel between the Polytone contracts that the relayer will use to relay messages between the Authorization contract and the processor.
- Instantiates the Processor contract on the external domain with the correct Polytone information and the Authorization contract address.
- Adds the external domain to the Authorization contract with the Polytone information and the processor address on the external domain.
After this is done, we can start creating authorizations for that external domain and when we send messages to the Authorization contract, the relayer will relay the messages to the processor on the external domain and return the callbacks.
- Create one or more base accounts on a domain.
```rust
let base_accounts = create_base_accounts(
    &mut test_ctx,
    DEFAULT_KEY,
    PERSISTENCE_CHAIN_NAME,
    base_account_code_id,
    PERSISTENCE_CHAIN_ADMIN_ADDR.to_string(),
    vec![processor_on_persistence.clone()],
    1,
    None,
);
let persistence_base_account = base_accounts.first().unwrap();
```
This function creates a base account on the external domain and grants permission to the processor address to execute messages on its behalf. If we were using a library instead, we would be granting permission to the library contract instead of the processor address in the array provided.
- Create the authorization
```rust
let authorizations = vec![AuthorizationBuilder::new()
    .with_label("execute")
    .with_subroutine(
        AtomicSubroutineBuilder::new()
            .with_function(
                AtomicFunctionBuilder::new()
                    .with_domain(Domain::External(PERSISTENCE_CHAIN_NAME.to_string()))
                    .with_contract_address(LibraryAccountType::Addr(
                        persistence_base_account.clone(),
                    ))
                    .with_message_details(MessageDetails {
                        message_type: MessageType::CosmwasmExecuteMsg,
                        message: Message {
                            name: "execute_msg".to_string(),
                            params_restrictions: None,
                        },
                    })
                    .build(),
            )
            .build(),
    )
    .build()];

info!("Creating execute authorization...");
let create_authorization = valence_authorization_utils::msg::ExecuteMsg::PermissionedAction(
    valence_authorization_utils::msg::PermissionedMsg::CreateAuthorizations { authorizations },
);

contract_execute(
    test_ctx
        .get_request_builder()
        .get_request_builder(NEUTRON_CHAIN_NAME),
    &authorization_contract_address,
    DEFAULT_KEY,
    &serde_json::to_string(&create_authorization).unwrap(),
    GAS_FLAGS,
)
.unwrap();
std::thread::sleep(std::time::Duration::from_secs(3));
info!("Execute authorization created!");
```
In this code snippet, we are creating an authorization to execute a message on the Persistence base account. For this particular example, since we are going to execute a `CosmosMsg::Stargate` directly on the account, passing the protobuf message, we are not going to set up any param restrictions. If we were using a library, we could potentially set up restrictions for the JSON message that the library would expect.
- Send message to the Authorization contract
```rust
info!("Send the messages to the authorization contract...");
let msg_liquid_stake = MsgLiquidStake {
    amount: Some(Coin {
        denom: neutron_on_persistence.clone(),
        amount: amount_to_liquid_stake.to_string(),
    }),
    delegator_address: persistence_base_account.clone(),
};
#[allow(deprecated)]
let liquid_staking_message = CosmosMsg::Stargate {
    type_url: msg_liquid_stake.to_any().type_url,
    value: Binary::from(msg_liquid_stake.to_proto_bytes()),
};

let binary = Binary::from(
    serde_json::to_vec(&valence_account_utils::msg::ExecuteMsg::ExecuteMsg {
        msgs: vec![liquid_staking_message],
    })
    .unwrap(),
);
let message = ProcessorMessage::CosmwasmExecuteMsg { msg: binary };

let send_msg = valence_authorization_utils::msg::ExecuteMsg::PermissionlessAction(
    valence_authorization_utils::msg::PermissionlessMsg::SendMsgs {
        label: "execute".to_string(),
        messages: vec![message],
        ttl: None,
    },
);

contract_execute(
    test_ctx
        .get_request_builder()
        .get_request_builder(NEUTRON_CHAIN_NAME),
    &authorization_contract_address,
    DEFAULT_KEY,
    &serde_json::to_string(&send_msg).unwrap(),
    GAS_FLAGS,
)
.unwrap();
std::thread::sleep(std::time::Duration::from_secs(3));
```
In this code snippet, we are sending a message to the Authorization contract to execute the liquid staking message on the base account on Persistence. Note that we are using the same label that we used in the authorization creation. This is important because the Authorization contract will check if the label matches the one in the authorization. If it does not match, the execution will fail. The Authorization contract will send the message to the corresponding Polytone contract that will send it via IBC to the processor on the external domain.
- Tick the processor
```rust
tick_processor(
    &mut test_ctx,
    PERSISTENCE_CHAIN_NAME,
    DEFAULT_KEY,
    &processor_on_persistence,
);
std::thread::sleep(std::time::Duration::from_secs(3));
```
The message should now be sitting on the processor on Persistence, so we need to tick the processor to trigger the execution. This will execute the message and send a callback with the result to the Authorization contract, completing the full testing cycle.
Example with Program Manager
This example demonstrates how to test your program using the Program Manager after your initial testing set up has been completed as described in the Initial Testing Set Up section.
Use-case: This example outlines the steps needed to create a program that provides and withdraws liquidity from an Osmosis Concentrated Liquidity pool using two library contracts: a CL Liquidity Provider and a CL Liquidity Withdrawer.
Prerequisites
Before proceeding, ensure you have:
- A basic understanding of Osmosis, Neutron, CosmWasm, and Valence
- Completed the initial testing setup as described in the setup section
- Installed all necessary dependencies and have a working development environment
Solution Overview
Full working code for this example can be found in the Osmosis Concentrated Liquidity example.
Our solution includes the following:
- We create three accounts on Osmosis:
  - CL Input: holds tokens ready to join the pool
  - CL Output: holds the position of the pool
  - Final Output: holds tokens after they've been withdrawn from the pool
- We instantiate the Concentrated Liquidity Provider and Concentrated Liquidity Withdrawer libraries on Osmosis:
  - The Liquidity Provider library will draw tokens from the CL Input account and use them to enter the pool
  - The Liquidity Withdrawer library will exit the pool from the position held in the CL Output account and deposit redeemed tokens into the Final Output account
- We add two permissionless authorizations on Neutron:
- Provide Liquidity: When executed, it'll call the provide liquidity function
- Withdraw Liquidity: When executed, it'll call the withdraw liquidity function
The following is a visual representation of the system we are building:
```mermaid
graph TD;
    subgraph Osmosis
        A1((CL Input))
        A2((CL Output))
        A3((Final Output))
        L1[Liquidity Provider]
        L2[Liquidity Withdrawer]
        EP[Processor]
    end
    subgraph Neutron
        A[Authorizations]
        MP[Processor]
    end
    A1 --> L1 --> A2
    A2 --> L2 --> A3
    User --Execute Msg--> A --Enqueue Batch--> EP
    EP --> L1
    EP --> L2
```
Code walkthrough
Before we begin, we set up the TestContext
as explained in the previous setup section. Then we can move on to steps pertinent to testing this example.
1. Setting up the program
1.1 Set up the Concentrated Liquidity pool on Osmosis
```rust
let ntrn_on_osmo_denom = test_ctx
    .get_ibc_denom()
    .base_denom(NEUTRON_CHAIN_DENOM.to_owned())
    .src(NEUTRON_CHAIN_NAME)
    .dest(OSMOSIS_CHAIN_NAME)
    .get();

let pool_id = setup_cl_pool(&mut test_ctx, &ntrn_on_osmo_denom, OSMOSIS_CHAIN_DENOM)?;
```
This sets up a CL pool on Osmosis using NTRN and OSMO as the trading pair. Because NTRN arrives on Osmosis over IBC, a helper function is used to compute the correct IBC denom (of the form `ibc/<hash>`) on Osmosis.
1.2 Set up the Program config builder and prepare the relevant accounts
The Program Manager uses a builder pattern to construct the program configuration. We set up the three accounts that will be used in the liquidity provision and withdrawal flow.
```rust
let mut builder = ProgramConfigBuilder::new(NEUTRON_CHAIN_ADMIN_ADDR.to_string());
let osmo_domain = Domain::CosmosCosmwasm(OSMOSIS_CHAIN_NAME.to_string());
let ntrn_domain = Domain::CosmosCosmwasm(NEUTRON_CHAIN_NAME.to_string());

// Create account information for the LP input, LP output, and final (LW) output accounts
let cl_input_acc_info =
    AccountInfo::new("cl_input".to_string(), &osmo_domain, AccountType::default());
let cl_output_acc_info =
    AccountInfo::new("cl_output".to_string(), &osmo_domain, AccountType::default());
let final_output_acc_info =
    AccountInfo::new("final_output".to_string(), &osmo_domain, AccountType::default());

// Add accounts to the builder
let cl_input_acc = builder.add_account(cl_input_acc_info);
let cl_output_acc = builder.add_account(cl_output_acc_info);
let final_output_acc = builder.add_account(final_output_acc_info);
```
1.3 Configure the libraries
Next we configure the libraries for providing and withdrawing liquidity. Each library is configured with input and output accounts and with parameters specific to its operation.

Note how `cl_output_acc` serves a different purpose for each library:
- for the liquidity provider library, it is the output account
- for the liquidity withdrawer library, it is the input account
```rust
// Configure the Liquidity Provider library
let cl_lper_config = LibraryConfig::ValenceOsmosisClLper(
    valence_osmosis_cl_lper::msg::LibraryConfig {
        input_addr: cl_input_acc.clone(),
        output_addr: cl_output_acc.clone(),
        lp_config: LiquidityProviderConfig {
            pool_id: pool_id.into(),
            pool_asset_1: ntrn_on_osmo_denom.to_string(),
            pool_asset_2: OSMOSIS_CHAIN_DENOM.to_string(),
            global_tick_range: TickRange {
                lower_tick: Int64::from(-1_000_000),
                upper_tick: Int64::from(1_000_000),
            },
        },
    },
);

// Configure the Liquidity Withdrawer library
let cl_lwer_config = LibraryConfig::ValenceOsmosisClWithdrawer(
    valence_osmosis_cl_withdrawer::msg::LibraryConfig {
        input_addr: cl_output_acc.clone(),
        output_addr: final_output_acc.clone(),
        pool_id: pool_id.into(),
    },
);

// Add libraries to the builder
let cl_lper_library = builder.add_library(LibraryInfo::new(
    "test_cl_lper".to_string(),
    &osmo_domain,
    cl_lper_config,
));
let cl_lwer_library = builder.add_library(LibraryInfo::new(
    "test_cl_lwer".to_string(),
    &osmo_domain,
    cl_lwer_config,
));
```
1.4 Create links between accounts and libraries
Input links (the first array in the `add_link()` call) grant libraries permission to execute on the specified accounts. Output links specify where the fungible results of a given function execution should be routed.
```rust
// Link input account -> liquidity provider -> output account
builder.add_link(&cl_lper_library, vec![&cl_input_acc], vec![&cl_output_acc]);

// Link output account -> liquidity withdrawer -> final output account
builder.add_link(&cl_lwer_library, vec![&cl_output_acc], vec![&final_output_acc]);
```
1.5 Create authorizations
Next we create authorizations for both providing and withdrawing liquidity. Each authorization contains a subroutine that specifies which function to call on which library. By default, calling these subroutines will be permissionless, however using the AuthorizationBuilder
we can constrain the authorizations as necessary.
```rust
builder.add_authorization(
    AuthorizationBuilder::new()
        .with_label("provide_liquidity")
        .with_subroutine(
            AtomicSubroutineBuilder::new()
                .with_function(cl_lper_function)
                .build(),
        )
        .build(),
);
builder.add_authorization(
    AuthorizationBuilder::new()
        .with_label("withdraw_liquidity")
        .with_subroutine(
            AtomicSubroutineBuilder::new()
                .with_function(cl_lwer_function)
                .build(),
        )
        .build(),
);
```
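For instance, here is a minimal sketch of a constrained variant, assuming the `AuthorizationModeInfo` and `PermissionTypeInfo` types from `valence_authorization_utils` (the committee address is a placeholder):

```rust
// Sketch only: restrict execution of an authorization to a single address.
// AuthorizationModeInfo / PermissionTypeInfo are assumed from
// valence_authorization_utils; the address below is a placeholder.
builder.add_authorization(
    AuthorizationBuilder::new()
        .with_label("provide_liquidity_restricted")
        .with_mode(AuthorizationModeInfo::Permissioned(
            PermissionTypeInfo::WithoutCallLimit(vec![
                "neutron1committee...".to_string(),
            ]),
        ))
        .with_subroutine(
            AtomicSubroutineBuilder::new()
                .with_function(cl_lper_function.clone())
                .build(),
        )
        .build(),
);
```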
1.6 Set up the Polytone connections
For a cross-domain program to communicate between domains, we instantiate the Polytone contracts and save the configuration in our Program Manager.
`setup_polytone` sets up the connection between two domains and therefore expects the following parameters:
- source and destination chain names
- source and destination chain IDs
- source and destination chain native denoms
```rust
// Prior to initializing the manager, we do the middleware plumbing
setup_polytone(
    &mut test_ctx,
    NEUTRON_CHAIN_NAME,
    OSMOSIS_CHAIN_NAME,
    NEUTRON_CHAIN_ID,
    OSMOSIS_CHAIN_ID,
    NEUTRON_CHAIN_DENOM,
    OSMOSIS_CHAIN_DENOM,
)?;
```
1.7 Initialize the program
Calling `builder.build()` here takes a snapshot of the current builder state. That state is then passed to the `use_manager_init()` call, which consumes it and builds the final program configuration before initializing it.
```rust
let mut program_config = builder.build();
use_manager_init(&mut program_config)?;
```
Congratulations! The program is now initialized across the two chains!
2. Executing the Program
After the initialization, we are ready to start processing messages. For a message to be executed, it first needs to be enqueued to the processor.
2.1 Providing Liquidity
If there are tokens available in the CL Input account, we are ready to provide liquidity. In a fresh local environment the account starts out empty, so it first needs to be funded; one possible way to do that is sketched below.
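This is a rough sketch under stated assumptions: it uses the localic-utils transfer builder and `localic_std`'s bank module as they appear elsewhere in the Valence e2e tests, and `cl_input_addr` is a placeholder for the instantiated CL Input account address:

```rust
// Sketch only: fund the CL Input account with both pool assets.
// cl_input_addr is a placeholder for the CL Input account's address.

// Send NTRN from Neutron to the CL Input account over IBC.
test_ctx
    .build_tx_transfer()
    .with_chain_name(NEUTRON_CHAIN_NAME)
    .with_recipient(&cl_input_addr)
    .with_denom(NEUTRON_CHAIN_DENOM)
    .with_amount(1_000_000)
    .send()?;

// Send OSMO to the CL Input account directly on Osmosis.
bank::send(
    test_ctx
        .get_request_builder()
        .get_request_builder(OSMOSIS_CHAIN_NAME),
    DEFAULT_KEY,
    &cl_input_addr,
    &[cosmwasm_std_old::Coin {
        denom: OSMOSIS_CHAIN_DENOM.to_string(),
        amount: 1_000_000u128.into(),
    }],
    &cosmwasm_std_old::Coin {
        denom: OSMOSIS_CHAIN_DENOM.to_string(),
        amount: 5_000u128.into(),
    },
)?;
```

With tokens in the CL Input account, we can enqueue the provide liquidity message: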
```rust
// Build the processor message for providing liquidity
let lp_message = ProcessorMessage::CosmwasmExecuteMsg {
    msg: Binary::from(serde_json::to_vec(
        &valence_library_utils::msg::ExecuteMsg::<_, ()>::ProcessFunction(
            valence_osmosis_cl_lper::msg::FunctionMsgs::ProvideLiquidityDefault {
                bucket_amount: Uint64::new(10),
            },
        ),
    )?),
};

// Wrap the processor message in an authorization module call
let provide_liquidity_msg = valence_authorization_utils::msg::ExecuteMsg::PermissionlessAction(
    valence_authorization_utils::msg::PermissionlessMsg::SendMsgs {
        label: "provide_liquidity".to_string(),
        messages: vec![lp_message],
        ttl: None,
    },
);

contract_execute(
    test_ctx
        .get_request_builder()
        .get_request_builder(NEUTRON_CHAIN_NAME),
    &authorization_contract_address,
    DEFAULT_KEY,
    &serde_json::to_string(&provide_liquidity_msg)?,
    GAS_FLAGS,
)?;
```
Now anyone can tick the processor to execute the message. After receiving a `tick`, the processor executes the message at the head of the queue and sends a callback to the Authorization contract with the result.
```rust
contract_execute(
    test_ctx
        .get_request_builder()
        .get_request_builder(OSMOSIS_CHAIN_NAME),
    &osmo_processor_contract_address,
    DEFAULT_KEY,
    &serde_json::to_string(
        &valence_processor_utils::msg::ExecuteMsg::PermissionlessAction(
            valence_processor_utils::msg::PermissionlessMsg::Tick {},
        ),
    )?,
    &format!(
        "--gas=auto --gas-adjustment=3.0 --fees {}{}",
        5_000_000, OSMOSIS_CHAIN_DENOM
    ),
)?;
```
2.2 Withdraw Liquidity
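The withdraw message below references `output_acc_cl_position` and `liquidity_amount`, which describe the position currently held by the CL Output account. A minimal sketch of obtaining them, assuming a hypothetical `query_cl_position` helper standing in for the query utilities used in the example repository:

```rust
// Sketch only: query the CL position held by the CL Output account.
// query_cl_position is a hypothetical helper; cl_output_acc_addr is a
// placeholder for the instantiated CL Output account address.
// Assumes: use cosmwasm_std::Decimal256; use std::str::FromStr;
let output_acc_cl_position = query_cl_position(&mut test_ctx, &cl_output_acc_addr)?;

// Withdraw the position's full liquidity (Osmosis reports it as a decimal string).
let liquidity_amount = Decimal256::from_str(&output_acc_cl_position.liquidity)?;
```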
To enqueue the withdraw liquidity message:
```rust
// Build the processor message for withdrawing liquidity
let lw_message = ProcessorMessage::CosmwasmExecuteMsg {
    msg: Binary::from(serde_json::to_vec(
        &valence_library_utils::msg::ExecuteMsg::<_, ()>::ProcessFunction(
            valence_osmosis_cl_withdrawer::msg::FunctionMsgs::WithdrawLiquidity {
                position_id: output_acc_cl_position.position_id.into(),
                liquidity_amount: Some(liquidity_amount),
            },
        ),
    )?),
};

// Wrap the processor message in an authorization module call
let withdraw_liquidity_msg = valence_authorization_utils::msg::ExecuteMsg::PermissionlessAction(
    valence_authorization_utils::msg::PermissionlessMsg::SendMsgs {
        label: "withdraw_liquidity".to_string(),
        messages: vec![lw_message],
        ttl: None,
    },
);

contract_execute(
    test_ctx
        .get_request_builder()
        .get_request_builder(NEUTRON_CHAIN_NAME),
    &authorization_contract_address,
    DEFAULT_KEY,
    &serde_json::to_string(&withdraw_liquidity_msg)?,
    GAS_FLAGS,
)?;
```
The above enqueues the message to withdraw liquidity. The processor will execute it next time it is ticked.
```rust
contract_execute(
    test_ctx
        .get_request_builder()
        .get_request_builder(OSMOSIS_CHAIN_NAME),
    &osmo_processor_contract_address,
    DEFAULT_KEY,
    &serde_json::to_string(
        &valence_processor_utils::msg::ExecuteMsg::PermissionlessAction(
            valence_processor_utils::msg::PermissionlessMsg::Tick {},
        ),
    )?,
    &format!(
        "--gas=auto --gas-adjustment=3.0 --fees {}{}",
        5_000_000, OSMOSIS_CHAIN_DENOM
    ),
)?;
```
This concludes the walkthrough. You have now initialized the program and used it to provide and withdraw liquidity on Osmosis from Neutron!
Security
Valence Programs have been independently audited. Please find audit reports here.
If you believe you've found a security-related issue with Valence Programs, please disclose responsibly by contacting the Timewave team at security@timewave.computer.
Deployment
This section contains a detailed explanation of how to deploy programs on different environments.
Environments
Local interchain deployment
To test a program locally, we use Strangelove's local-interchain tooling (part of the interchaintest suite) to spin up chains.
1. Installing local-interchain
Before you can run the tests, you need to install local-interchain. This is a one-time operation. NOTE: the binary links back to the location where you installed it; if you remove that folder, you will need to run `make install` again.
```bash
git clone https://github.com/strangelove-ventures/interchaintest && cd interchaintest/local-interchain && make install
```
2. Running chains locally
Run one of the setup configs we have in the `e2e/chains` folder. For example, to run the `neutron.json` config, run the following command from the `e2e` folder:
```bash
cd e2e
local-ic start neutron --api-port 42069
```
This starts a local environment with a Gaia chain and a Neutron (ICS) chain. The `--api-port` flag exposes the API on port 42069; our local-ic-utils crate expects this port, so using it lets us reuse the utilities there.
This process also writes the API endpoints of each chain to `e2e/chains/configs/logs.json`. The setup script uses this file to determine which RPCs to use.
3. Optimize Contracts
From the root directory, use the CosmWasm optimizer to optimize contracts. The output will be written to an `artifacts` folder in the project root.
```bash
just optimize
```
Or:
```bash
./devtools/optimize.sh
```
4. Generate manager config
Before deploying a program, some initial setup is required. The script below deploys all required contracts to the chain, instantiates a registry contract, and sets up Polytone bridges.
```bash
cargo run -p generate_local_ic_config
```
The script writes all related code IDs, addresses, and RPC endpoints to `deployment/configs/local/config.toml`, to be used by the Program Manager.
The default chain config used by this script is `neutron.json`; if you started local-ic with a different chain config in step 2, use that same config here. Example with the `neutron_juno.json` chain config:
```bash
cargo run -p generate_local_ic_config -- -c neutron_juno.json
```
5. Build program config
Before deploying a program, we need to build the program config.
This script takes the program you build using the Program Builder in `my_program.rs` and outputs the program config in JSON format to `output_program/program.json`.
```bash
cargo run -p build_program
```
- Note: this script is a helper that generates a program config in JSON format using our Rust Program Builder; a program config in JSON format can also be produced by any other method. A minimal sketch of what `my_program.rs` might contain is shown below.
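For orientation, a bare-bones sketch reusing the `ProgramConfigBuilder` API from the walkthrough above (the account name and single-account shape are illustrative):

```rust
// Sketch only: a minimal program definition using the same builder API as
// the walkthrough above. Real programs add libraries, links, and
// authorizations before calling build().
let mut builder = ProgramConfigBuilder::new(NEUTRON_CHAIN_ADMIN_ADDR.to_string());
let ntrn_domain = Domain::CosmosCosmwasm(NEUTRON_CHAIN_NAME.to_string());

// A single base account on Neutron as a starting point.
let input_acc = builder.add_account(AccountInfo::new(
    "input".to_string(),
    &ntrn_domain,
    AccountType::default(),
));

// ... add libraries, account <-> library links, and authorizations here ...

let program_config = builder.build();
```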
6. Deploy a program
To deploy a program, you can use the `deploy_program` script.
To run this script you need a manager config and a program config:
- The manager config is generated by the `generate_local_ic_config` script; for a local environment it lives at `deployment/configs/local/config.toml`
- The program config can be generated using the `build_program` script (or any other method); it lives at `deployment/output_program/program.json`
```bash
cargo run -p deploy_program
```
By default, the script looks for the program config generated by the `build_program` script at `deployment/output_program/program.json`. You can pass a different path to the config with:
```bash
cargo run -p deploy_program -- -p path/to/program_config.json
```
7. Program Instantiated
After a program is instantiated successfully, you will see a success message in the console along with the path of the generated program config file. The file name ends with the program id, for example `program_1.json`. You will find this file under the `deployment/results` folder.