Introduction

🚧 Valence Protocol architecture and developer documentation is still evolving rapidly. Portions of the toolchain have stabilized for building cross-chain vaults and extending vaults with multi-party agreements. Send us a message on X if you'd like to get started!

Valence is a unified development environment that enables building trust-minimized cross-chain DeFi applications, called Valence Programs.

Valence Programs are:

  • Easy to understand and quick to deploy: a program can be set up with a configuration file and no code.
  • Extensible: if we don't yet support a DeFi integration out of the box, new integrations can be written in a matter of hours!

Example Use Case:

A DeFi protocol wants to bridge tokens to another chain and deposit them into a vault. After a certain date, it wants to unwind the position. While the position is active, it may also want to delegate the right to change vault parameters to a designated committee so long as the parameters are within a certain range. Without Valence Programs, the protocol would have two choices:

  1. Give the tokens to a multisig to execute actions on the protocol's behalf
  2. Write custom smart contracts and deploy them across multiple chains to handle the cross-chain token operations.

Valence Programs offer the DeFi protocol a third choice: rapidly configure and deploy a secure solution that meets its needs without trusting a multisig or writing complex smart contracts.

Valence Programs

There are two ways to execute Valence Programs.

  1. On-chain Execution: Valence currently supports CosmWasm and EVM, with SVM support coming soon. The rest of this section provides a high-level breakdown of the components that comprise a Valence Program using on-chain execution.

  2. Off-chain Execution via ZK Coprocessor: Early specifications exist for the Valence ZK coprocessor (see /zk-coprocessor/_overview.md). We aim to move as much computation off-chain as possible, since off-chain computation is a more scalable approach to building a cross-chain execution environment.

Unless explicitly mentioned, you may assume that documentation and examples in the remaining sections are written with on-chain execution in mind.

Domains

A domain is an environment in which the components that form a program (more on these later) can be instantiated (deployed).

Domains are defined by three properties:

  1. The chain: the blockchain's name e.g. Neutron, Osmosis, Ethereum mainnet.
  2. The execution environment: the environment under which programs (typically smart contracts) can be executed on that particular chain e.g. CosmWasm, EVM, SVM.
  3. The type of bridge used from the main domain to other domains e.g. Polytone over IBC, Hyperlane.

Within a particular ecosystem of blockchains (e.g. Cosmos), the Valence Protocol usually defines one specific domain as the main domain, on which some supporting infrastructure components are deployed. Think of it as the home base supporting the execution and operations of Valence Programs. This will be further clarified in the Authorizations & Processors section.

Below is a simplified representation of a program transferring tokens from a given input account on the Neutron domain, a CosmWasm-enabled smart contract platform secured by the Cosmos Hub, to a specified output account on the Osmosis domain, a well-known DeFi platform in the Cosmos ecosystem.

---
title: Valence Cross-Domain Program
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  subgraph Neutron
  IA
  end
  subgraph Osmosis
  IA -- Transfer tokens --> OA
  end

Accounts

Valence Programs usually perform operations on tokens across multiple domains. To ensure that the funds remain safe throughout a program's execution, Valence Programs rely on a primitive called Valence Accounts.

A Valence Account is an escrow contract that can hold balances for various supported token types (e.g., ICS-20 or CW-20 tokens in Cosmos) and ensure that only a restricted set of operations can be performed on the held tokens. Valence Accounts are created (i.e., instantiated) on a specific domain and bound to a specific Valence Program. Valence Programs will typically use multiple accounts during the program's lifecycle for different purposes. Valence Accounts are generic by nature; their use in forming a program is entirely up to the program's creator.

Using a simple token swap program as an example: the program receives an amount of Token A in an input account and swaps this Token A for Token B using a DEX on the same domain (e.g., Neutron). After the swap operation, the received amount of Token B is temporarily held in a transfer account before being transferred to a final output account on another domain (e.g., Osmosis).

For this, the program will create the following accounts:

  • A Valence Account is created on the Neutron domain to act as the Input account.
  • A Valence Account is created on the Neutron domain to act as the Transfer account.
  • A Valence Account is created on the Osmosis domain to act as the Output account.
---
title: Valence Token Swap Program
---
graph LR
  IA((Input
    Account))
  TA((Transfer
    Account))
  OA((Output
	Account))
  DEX
  subgraph Neutron
  IA -- Swap Token A --> DEX
  DEX -- Token B --> TA
  end
  subgraph Osmosis
  TA -- Transfer token B --> OA
  end

Note: this is a simplified representation.

Valence Accounts do not perform any operation by themselves on the held funds. The operations are performed by Valence Libraries.

Libraries and Functions

Valence Libraries contain the business logic that can be applied to the funds held by Valence Accounts. Most often, this logic is about performing operations on tokens, such as splitting, routing, or providing liquidity on a DEX. A Valence Account has to first approve (authorize) a Valence Library for it to perform operations on that account's balances. Each Valence Library exposes the Functions it supports. Valence Programs compose Valence Accounts and Valence Libraries into graphs of varying complexity to form sophisticated cross-chain workflows. During the course of a Valence Program's execution, Functions are called by external parties; these calls trigger the library's operations on the linked accounts.

A typical pattern for a Valence Library is to have one (or more) input account(s) and one (or more) output account(s). While many libraries implement this pattern, it is by no means a requirement.
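To make the approval relationship concrete, below is a minimal CosmWasm-style sketch of an account that only executes messages submitted by previously approved libraries. The message and function names are illustrative assumptions, not the exact Valence interfaces.

#![allow(unused)]
fn main() {
use cosmwasm_std::{CosmosMsg, Response, StdError, StdResult};

// Hypothetical account interface; names are for illustration only
pub enum AccountExecuteMsg {
    // Grant a library the right to act on this account's balances
    ApproveLibrary { library: String },
    // Revoke a previously granted approval
    RemoveLibrary { library: String },
    // Submit messages to be executed by the account (approved libraries only)
    ExecuteMsg { msgs: Vec<CosmosMsg> },
}

// Only approved libraries may have the account execute messages
fn execute_by_library(approved: &[String], sender: &str, msgs: Vec<CosmosMsg>) -> StdResult<Response> {
    if !approved.iter().any(|lib| lib == sender) {
        return Err(StdError::generic_err("unauthorized: library not approved"));
    }
    Ok(Response::new().add_messages(msgs))
}
}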

Valence Libraries play a critical role in integrating Valence Programs with existing decentralized apps and services that can be found in many blockchain ecosystems (e.g., DEXes, liquid staking, etc.).

Now that we know accounts cannot perform any operations by themselves, we need to revisit the token swap program example (mentioned on the Accounts page) and bring Valence Libraries into the picture. The program receives an amount of Token A in an input account. A Token Swap library exposes a swap function that, when called, swaps the Token A held by the input account for Token B using a DEX on the same domain (e.g., Neutron) and transfers the proceeds to the transfer account. A Token Transfer library exposes a transfer function that, when called, transfers the Token B amount to a final output account on another domain (e.g., Osmosis). In this scenario, the DEX is an existing service found on the host domain (e.g., Astroport on Neutron), so it is not part of the Valence Protocol.

The program is then composed of the following accounts & libraries:

  • A Valence Account is created on the Neutron domain to act as the input account.
  • A Valence Account is created on the Neutron domain to act as the transfer account.
  • A Token Swap Valence Library is created on the Neutron domain, authorized by the input account (to be able to act on the held Token A balance), and configured with the input account and transfer account as the respective input and output for the swap operation.
  • A Token Transfer Valence Library is created on the Neutron domain, authorized by the transfer account (to be able to act on the held Token B balance), and configured with the transfer account and output account as the respective input and output for the transfer operation.
  • A Valence Account is created on the Osmosis domain to act as the output account.
---
title: Valence Token Swap Program
---
graph LR
  FC[[Function call]]
  IA((Input
	Account))
  TA((Transfer
	Account))
  OA((Output
	Account))
  TS((Token
  	Swap Library))
  TT((Token
  	Transfer Library))
  DEX
  subgraph Neutron
  FC -- 1/Swap --> TS
  TS -- Swap Token A --> IA
  IA -- Token A --> DEX
  DEX -- Token B --> TA
  FC -- 2/Transfer --> TT
  TT -- Transfer Token B --> TA
  end
  subgraph Osmosis
  TA -- Token B --> OA
  end

This example highlights the crucial role that Valence Libraries play for integrating Valence Programs with pre-existing decentralized apps and services.

However, one thing remains unclear in this example: how are Functions called? This is where Programs and Authorizations come into the picture.

Programs and Authorizations

A Valence Program is an instance of the Valence Protocol. It is a particular arrangement and configuration of accounts and libraries across multiple domains (e.g., a POL (protocol-owned liquidity) lending relationship between two parties). Similarly to how a library exposes executable functions, programs are associated with a set of executable Subroutines.

A Subroutine is a vector of Functions. A Subroutine can call out to one or more Function(s) from a single library, or from different libraries. A Subroutine is limited to one execution domain (i.e., Subroutines cannot use functions from libraries instantiated on multiple domains).

A Subroutine can be:

  • Non Atomic (e.g., Execute function one. If that succeeds, execute function two. If that succeeds, execute function three. And so on.)
  • or Atomic (e.g., execute function one, function two, and function three. If any of them fail, then revert all steps.)
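As a sketch of the shapes involved, a Subroutine can be modeled roughly as follows; the type and field names are simplified assumptions distilled from the descriptions in this section.

#![allow(unused)]
fn main() {
// Illustrative sketch only; actual protocol types may differ
pub struct Function {
    // Domain must be the same for every function in a subroutine
    pub domain: String,
    // Library contract exposing the function
    pub contract_address: String,
}

pub struct RetryLogic {
    // Maximum number of retries before giving up
    pub times: u64,
}

pub enum Subroutine {
    // Execute all functions; revert every step if any of them fails
    Atomic {
        functions: Vec<Function>,
        retry_logic: Option<RetryLogic>,
    },
    // Execute functions in order; each must succeed before the next runs
    NonAtomic {
        functions: Vec<Function>,
    },
}
}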

Valence Programs are typically used to implement complex cross-chain workflows that perform financial operations in a trust-minimized way. Because multiple parties may be involved in a Valence Program, those parties may wish to place limits on what each of them is authorized to do.

To specify fine-grained controls over who can initiate the execution of a Subroutine, program creators use the Authorizations module.

The Authorizations module is a powerful and flexible system that supports access control configuration schemes, such as:

  • Anyone can initiate execution of a Subroutine
  • Only permissioned actors can initiate execution of a Subroutine
  • Execution can only be initiated after a starting timestamp/block height
  • Execution can only be initiated up to a certain timestamp/block height
  • Authorizations are tokenized, which means they can be transferred by the holder or used in more sophisticated DeFi scenarios
  • Authorizations can expire
  • Authorizations can be enabled/disabled
  • Authorizations can tightly constrain parameters (e.g., an authorization to execute a token transfer message can limit the execution to only supply the amount argument, not the denom or receiver in the transfer message)
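Putting the above together, an authorization can be pictured roughly as the following structure; the field names are assumptions distilled from this list and from the Authorizations & Processors deep-dive later in this document.

#![allow(unused)]
fn main() {
// Illustrative sketch only; see the Authorizations & Processors section
pub enum AuthorizationMode {
    Permissionless,
    // Permissioned, optionally enforcing a per-address call limit
    Permissioned { call_limit: bool, addresses: Vec<String> },
}

pub enum Priority { Med, High }

pub struct Subroutine; // set of functions, as described above

pub struct Authorization {
    pub label: String,
    pub mode: AuthorizationMode,
    // Execution window (block height or timestamp)
    pub not_before: Option<u64>,
    pub expiration: Option<u64>,
    pub max_concurrent_executions: u64,
    pub subroutine: Subroutine,
    pub priority: Priority,
    // Authorizations can be enabled/disabled
    pub enabled: bool,
}
}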

To support the on-chain execution of Valence Programs, the Valence Protocol provides two important contracts: the Authorizations Contract and the Processor Contract.

The Authorizations Contract is the entry point for users. The user sends a set of messages to the Authorizations Contract and the label (id) of the authorization they want to execute. The Authorizations Contract then verifies that the sender is authorized and that the messages are valid, constructs a MessageBatch based on the subroutine, and passes this batch to the Processor Contract for execution. The authority to execute any Subroutine is tokenized so that these tokens can be transferred on-chain.

The Processor Contract receives a MessageBatch and executes the contained Messages in sequence. It does this by maintaining execution queues whose items are Message Batches. The processor exposes a Tick message that allows anyone to trigger the processor, whereby the first batch of the queue is executed or moved to the back of the queue if it's not executable yet (e.g., its retry period has not passed).
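The Tick mechanic amounts to a simple queue rotation; the following is an illustrative model only, not the contract's actual code.

#![allow(unused)]
fn main() {
use std::collections::VecDeque;

struct Batch {
    // Earliest height at which the batch may (re-)execute
    executable_at: u64,
}

fn execute(_batch: Batch) { /* dispatch the batch's messages to the target libraries */ }

// Anyone can call tick; the front batch either executes or rotates to the back
fn tick(queue: &mut VecDeque<Batch>, current_height: u64) {
    if let Some(batch) = queue.pop_front() {
        if batch.executable_at <= current_height {
            execute(batch);
        } else {
            queue.push_back(batch); // not executable yet (e.g. retry period pending)
        }
    }
}
}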

graph LR;
	User --> |Subroutine| Auth(Authorizations)
	Auth --> |Message Batch| P(Processor)
	P --> |Function 1| S1[Library 1]
	P --> |Function 2| S2[Library 2]
	P --> |Function N| S3[Library N]

WIP: Middleware

The Valence Middleware is a set of components that provide a unified interface for the Valence Type system.

At its core, middleware is made up of the following components.

Design goals

TODO: describe modifiable middleware, design goals and philosophy behind it

These goals are achieved with three key components:

  • brokers
  • type registries
  • Valence types

Middleware Brokers

Middleware brokers are responsible for managing the lifecycle of middleware instances and their associated types.

Middleware Type Registries

Middleware Type Registries are responsible for unifying a set of foreign types to be used in Valence Programs.

Valence Types

Valence Types are the canonical representations of various external domain implementations of some types.

Valence ZK coprocessor

⚠️ Note: Valence's ZK coprocessor is currently in specification stage and evolving rapidly. This document is shared to give partners a preview of our roadmap in the spirit of building in public.

The Valence ZK coprocessor is a universal DeFi execution engine. It allows developers to compose programs once and deploy them across multiple blockchains. Additionally, the coprocessor facilitates execution of arbitrary cross-chain messages with a focus on synchronizing state between domains. Using Valence, developers can:

  1. Build once, deploy everywhere. Write programs in Rust and settle on one or more EVM, Wasm, Move, or SVM chains.
  2. Avoid introducing additional trust assumptions. Only trust the consensus of the underlying chains you are building on.

While the actual execution is straightforward, the challenge lies in encoding state. The ZK program, as a pure function, must be able to utilize existing state as arguments to produce an evaluated output state.

Initially, we can develop an efficient version of this coprocessor with roughly the same effort as creating the state encoder. It is crucial to note, however, that each chain will necessitate a separate encoder implementation. The initial version will also require users to deploy their custom verification keys, along with the state mutation function, on the target blockchain; the code required for this purpose will be minimal.

Longer term, we plan to develop a decoder that will automate the state mutation process based on the output of the ZK commitment. For this initial version, users will be able to perform raw mutations directly, as the correctness of ZK proofs will ensure the validity of messages according to the implemented ZK circuit.

---
title: ZK coprocessor overview
---
graph TB;
    %% Programs
    subgraph ZK coprocessor
        P1[zk program 1]
        P2[zk program 2]
        P3[zk program 3]
    end

    %% Chains
    C1[chain 1]
    C2[chain 2]
    C3[chain 3]

    P1 <--> C1
    P2 <--> C2
    P3 <--> C3

zkVM Primer

A zero-knowledge virtual machine (zkVM) is a zero-knowledge proof system that allows developers to prove the execution of arbitrary programs. In our case, these programs are written in Rust. Given a Rust program that can be described as a pure function f(x) = y, one can prove the evaluation in the following way:

  1. Define f using normal Rust code and compile the function as an executable binary
  2. With this executable binary, set up a proving key pk and verifying key vk
  3. Generate a proof p that f was evaluated correctly given input x using the zkVM, by calling prove(pk, x)
  4. Now you can verify this proof p by calling verify(vk, x, y, p)
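Written out against a hypothetical zkVM interface (the stub functions below stand in for whatever the chosen zkVM actually exposes), the four steps look like this:

#![allow(unused)]
fn main() {
// Hypothetical zkVM interface; stubs stand in for a real proving system
struct Binary; struct ProvingKey; struct VerifyingKey; struct Proof;

fn f(x: u64) -> u64 { x.wrapping_mul(x) } // the pure function to prove

fn compile(_f: fn(u64) -> u64) -> Binary { Binary }                                // step 1
fn setup(_b: &Binary) -> (ProvingKey, VerifyingKey) { (ProvingKey, VerifyingKey) } // step 2
fn prove(_pk: &ProvingKey, x: u64) -> (u64, Proof) { (f(x), Proof) }               // step 3
fn verify(_vk: &VerifyingKey, x: u64, y: u64, _p: &Proof) -> bool { f(x) == y }    // step 4

let binary = compile(f);
let (pk, vk) = setup(&binary);
let (y, p) = prove(&pk, 7);
assert!(verify(&vk, 7, y, &p));
}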

Building the Valence ZK coprocessor

Let's assume that we have Valence Accounts in each domain. These accounts implement a key-value (KV) store.

Every ZK computation will follow the format of a pure state transition function; specifically, we input a state A, apply the function f to it, and produce the resulting state B: f(A) = B. For the function f, the chosen zkVM will generate a verifying key K, which remains consistent across all state transition functions.

Encoding the account state: Unary Encoder

To ensure every state transition computed as a ZK proof by the coprocessor is a pure state transition function, we require a method to encode the entire account's state into initial and mutated forms, A and B, respectively, for use in providing the applicable state modifications for the target chain.

In essence, let's consider an account with its state containing a map that assigns a balance (u64 value) to each key. A contract execution transferring 100 tokens from key m to n can be achieved by invoking state.transfer(signature, m, n, 100). This on-chain transfer function may look something like this:

#![allow(unused)]
fn main() {
fn transfer(&mut self, signature: Signature, from: Address, to: Address, value: u64) {
    assert!(signature.verify(&from));
    assert!(value > 0);

    let balance_from = self.get(&from).unwrap();
    let balance_to = self.get(&to).unwrap_or(0);

    let balance_from = balance_from.checked_sub(value).unwrap();
    let balance_to = balance_to.checked_add(value).unwrap();

    self.insert(from, balance_from);
    self.insert(to, balance_to);
}
}

Here, the pre-transfer state is A and after the transfer, the state is B.

Let's write a new function called transfer_trusted that leaves signature verification to the ZK coprocessor.

#![allow(unused)]
fn main() {
fn transfer_trusted(&mut self, from: Address, to: Address, value: u64) {
    let balance_from = self.get(&from).unwrap();
    let balance_to = self.get(&to).unwrap_or(0);

    self.insert(from, balance_from - value);
    self.insert(to, balance_to + value);
}
}

In the ZK setting, we execute the transfer function within the zkVM. We must input the encoded state of the account and receive as output the encoded state of the mutated account.

#![allow(unused)]
fn main() {
fn program(mut state: State, encoder: Encoder, arguments: Arguments) -> Commitment {
    // Commit to the initial state and the arguments before mutating
    let initial = encoder.commitment(&state);
    let args = encoder.commitment(&arguments);

    let (signature, from, to, value) = arguments;
    state.transfer(signature, from, to, value);

    // Commit to the mutated state, then bind all three commitments together
    let finalized = encoder.commitment(&state);

    encoder.commitment(&(initial, args, finalized))
}
}

Running this program within the zkVM also allows us to generate a Proof.

Upon receiving the (Proof, Commitment, Arguments) data, the target chain can validate execution correctness by verifying the proof and commitments. This leverages the ZK property that the proof is valid if, and only if, the contract's execution was accurate for the given inputs and the supplied commitments are those generated specifically for this proof.

#![allow(unused)]
fn main() {
fn verify(&self, proof: Proof, arguments: Arguments) {
    let current = self.state.commitment();
    let args = arguments.commitment();
    let (from, to, value) = arguments;

    self.transfer_trusted(from, to, value);

    let mutated = self.state.commitment();
    let commitment = (current, args, mutated).commitment();

    proof.verify(&self.vk, commitment);
}
}

By doing so, we switch from on-chain signature verification to computation over committed arguments, followed by ZK proof verification. Although we've presented a simplified example, the same verification process can accommodate any computation supported by a zkVM, enabling us to process multiple transfers in batches, perform intricate computation, and succinctly verify execution correctness. We refer to this pattern as a "Unary Encoder" because we compress the two states of the account, 'current' and 'mutated', into a single zero-knowledge proof.

The Unary Encoder will be responsible for compressing any chain account state into a compatible commitment for the chosen zkVM (in our case, a RISC-V zkVM). The encoding is a one-way function that allows anyone in possession of the pre-image, i.e. the inputs to the encoding function, to reconstruct the commitment. This commitment will be transparent to the target chain, enabling use in construction of the block header for verification purposes.

Handling state transition dependencies across domains: Merkelized Encoder

Let's assume a hypothetical situation where we aim to achieve decoupled state updates across three distinct chains: chain 1, chain 2, and chain 3. The objective is to generate a unified ZK proof that verifies the correctness of the state transitions on all chains.

Specifically, chain 3 will depend on a mutation from chain 1, while chain 2 operates independently of the mutations on both chain 1 and chain 3.

graph TB
    %% Root node
    r[R]
    
    %% Level 1
    m1[M1] --> r
    m2[M2] --> r
    
    %% Level 2
    c1[C1] --> m1
    c2[C2] --> m1
    c3[C3] --> m2
    zero((0)) --> m2
    
    %% Level 3
    chain1["(S1 --> T1), K1"] -- chain 1 transition encoding --> c1
    chain2["(S2 --> T2), K2"] -- chain 2 transition encoding --> c2
    chain3["(S3 --> T3), K3"] -- chain 3 transition encoding --> c3

The Merkle Graph above depicts the state transitions that can be compressed into a single commitment via Merkelization. Given an encoder with a specialized argument (a Sparse Merkle tree containing encoded state transition values, indexed by the program's verifying key on the target blockchain), we obtain a Merkle Root denoted as R.

The ZK coprocessor can execute proof computations either sequentially or in parallel. The parallel computation associated with C2 operates independently and generates a unary proof of S2 -> T2. Conversely, the proof for C3 requires querying T1.

Since chain 3 has a sequential execution, the coprocessor will first process C1, then relay the pre-image of T1 to the coprocessor responsible for computing C3. Due to the deterministic nature of unary encoding, the chain 3 coprocessor can easily derive T1 and validate its foreign state while concurrently processing C3.

At this point, no justification has been given for Merkelizing the produced proofs; hashing the entire set of Merkle arguments would work as well. However, it's worth noting that chain 2 does not require knowledge of the data (S1, T1, K1, S3, T3, K3). Including such information in the verification arguments of chain 3 would unnecessarily burden its proving process. A Merkle tree is employed here for its logarithmic verification property: the condensed proof generated for chain 2 will only require a Merkle opening to R, without requiring excess state data from other chains. Essentially, when generating the Merkelized proof, the chain 2 coprocessor, after computing C2, will need only C1 and M2, rather than all Merkle arguments.

Finally, each chain will receive R, accompanied by its individual state transition arguments, and the Merkle path leading to R will be proven inside of the circuit.

---
title: On-chain Proof Verification
---
graph TD;
	coprocessor --(R1, T1)--> chain1
	coprocessor --(R2, T2)--> chain2
	coprocessor --(R3, T3, R1, T1, C2)--> chain3

In this diagram, we see that chain 3 will first verify(R3, T3), then verify(R1, T1), then query(T1), then compute C1 := encoding(S1, T1), then compute C3 := encoding(S3, T3), and finally assert R == H(H(C1, C2), H(C3, 0)).
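The final assertion amounts to recomputing the Merkle root from the transition encodings. Here is a sketch, with a placeholder standing in for the real hash primitive:

#![allow(unused)]
fn main() {
type Hash = [u8; 32];

// Placeholder; a real implementation would use a cryptographic hash
fn hash(_data: &[u8]) -> Hash { [0u8; 32] }

fn h2(a: &Hash, b: &Hash) -> Hash {
    hash(&[a.as_slice(), b.as_slice()].concat())
}

// R == H(H(C1, C2), H(C3, 0)), as asserted by chain 3
fn assert_root(r: Hash, c1: Hash, c2: Hash, c3: Hash) {
    let zero = [0u8; 32];
    let m1 = h2(&c1, &c2);
    let m2 = h2(&c3, &zero);
    assert_eq!(r, h2(&m1, &m2));
}
}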

Sparse Merkle tree

A sparse Merkle tree (SMT) is a specialized version of a Merkle tree, characterized by a leaf index defined by an injective function derived from a predefined argument at the design level. The verification key of a ZK circuit is another constant, also injective to the circuit's definition, and can serve as an index for the available programs.

Since a ZK proof is a product of its verification key (alongside other attributes), the verification key allows us to index a proof within a collection of proofs for distinct programs.

Assuming that we don't reuse the same proof for different purposes during a state transition (the program will either be raw or recursed), the verifying key is a unique index in such a collection.

This document describes a sparse Merkle tree design that employs indexing proofs based on the hash of the verification key.

Merkle tree

A Merkle tree is typically a (binary) tree structure consisting of leaves and nodes. Each node in this tree represents the cryptographic hash of its children, while the leaves hold an arbitrary piece of data—usually the hash value of some variable input.

For a hash function H, if we insert the data items A, B, C into a Merkle tree, the resulting structure would look like:

graph TB
    %% Root node
    r["R := H(t10, t11)"]
    
    %% Level 1
    m1["t10 := H(t00, t01)"] --> r
    m2["t11 := H(t02, t03)"] --> r
    
    %% Level 2
    c1["t00 := H(A)"] --> m1
    c2["t01 := H(B)"] --> m1
    c3["t02 := H(C)"] --> m2
    c4["t03 := 0"] --> m2

Membership proof

A Merkle tree serves as an efficient data structure for validating the membership of a leaf node within a set in logarithmic time, making it especially useful for handling large sets. A Merkle opening (or Merkle proof) is an array of sibling nodes that outlines a Merkle Path leading to a commitment Root. Because a cryptographic hash is non-malleable, it is unfeasible to discover a set of siblings resulting in the root other than the valid inputs. Given that the leaf node is known to the verifier, a Merkle Proof consists of a sequence of hashes leading up to the root. This allows the verifier to compute the root value and compare it with the known Merkle root, thereby confirming the membership of any alleged member without relying on the trustworthiness of the source. Consequently, a single hash commitment ensures that any verifier can securely validate the membership of any proposed member supplied by an untrusted party.

In the example above, the Merkle opening for C is the set of siblings along the path to the root, that is: [t03, t10]. The verifier, who knows R beforehand, will compute:

  1. t02 := H(C)
  2. t11 := H(t02, t03)
  3. R' := H(t10, t11)

If R == R', then C is a member of the set.

Note that the depth of the tree is the length of its Merkle opening, that is: we open up to a node with depth equal to the length of the proof.
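The verifier's side of this can be sketched as a walk from the leaf to the root, using the index bits to decide whether the running node is a left or right child. The hash function is again a placeholder.

#![allow(unused)]
fn main() {
type Hash = [u8; 32];

fn hash(_data: &[u8]) -> Hash { [0u8; 32] } // placeholder for a real primitive

fn h2(a: &Hash, b: &Hash) -> Hash {
    hash(&[a.as_slice(), b.as_slice()].concat())
}

// For C in the example: verify_membership(r, hash_of_c, 2, &[t03, t10])
fn verify_membership(root: Hash, leaf: Hash, index: usize, siblings: &[Hash]) -> bool {
    let mut node = leaf;
    let mut idx = index;
    for sibling in siblings {
        // Even index: node is the left child; odd index: the right child
        node = if idx % 2 == 0 { h2(&node, sibling) } else { h2(sibling, &node) };
        idx /= 2;
    }
    node == root
}
}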

Sparse Data

Let's consider a public function f that accepts a member and returns a tuple. This tuple consists of the index within the tree as a u64 value, and the hash of the leaf: (i, h) = f(X).

For the example above, let’s assume two members:

  • (3, a) := f(A)
  • (1, b) := f(B)
graph TB
    %% Root node
    r["R := H(t10, t11)"]
    
    %% Level 1
    m1["t10 := H(t00, t01)"] --> r
    m2["t11 := H(t02, t03)"] --> r
    
    %% Level 2
    c1["t00 := 0"] --> m1
    c2["t01 := b"] --> m1
    c3["t02 := 0"] --> m2
    c4["t03 := a"] --> m2

The primary distinction of a sparse Merkle tree lies in the deterministic leaf index, making it agnostic to input order. In essence, this structure forms an unordered set whose equivalence remains consistent irrespective of the sequence in which items are appended.

The behavior of the membership proof in this context mirrors that in a traditional Merkle tree, except that a sparse Merkle tree enables the generation of a non-membership proof. To achieve this, we carry out a Merkle opening at the specified target index, and expect it to be 0.

Let's assume a non-member X with (0, x) := f(X). To prove non-membership, we broadcast [b, t11]. To verify the non-membership of X, knowing R and the non-membership proof, we:

  1. (0, x) := f(X)
  2. t10 := H(0, b); here we open to 0
  3. R' := H(t10, t11)

If R == R', then 0 is at the slot of X. Since we know that X is not the pre-image of 0 under H, X is not a member of the tree.
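Non-membership verification follows the same walk, except the value opened at the target index is expected to be the zero leaf; this sketch reuses Hash and verify_membership from the previous sketch.

#![allow(unused)]
fn main() {
// Assumes Hash and verify_membership as defined in the previous sketch
fn verify_non_membership(root: Hash, index: usize, siblings: &[Hash]) -> bool {
    let zero_leaf: Hash = [0u8; 32]; // the empty-slot value
    verify_membership(root, zero_leaf, index, siblings)
}
}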

Valence SMT

Within the scope of Valence, the sparse Merkle tree is designed to utilize the hash of the verifying key generated by the ZK circuit as its index. The tree's leaf data will encompass the proof and input arguments for the ZK program. In this particular implementation, we can consider the input arguments as a generic type, which will be specifically defined during development. These input arguments will constitute the key-value pairs that define a subset of the contract state essential for state transition. The proof will be a vector of bytes.

The tree depth will be adaptive, representing the smallest feasible value required to traverse from the leaf nodes up to the root, given the number of elements involved. This approach ensures we avoid unnecessary utilization of nodes containing unused entries.

For instance, if the tree contains two adjacent nodes indexed at [(0,0), (0,1)], the Merkle opening proof will have a single element—specifically the sibling leaf of the validated node.

In case the tree comprises two nodes with indices [(0,0), (0,2)], the Merkle opening will require two elements, allowing for a complete traversal from the leaves to the root.

Precomputed empty subtrees

Every Merkle tree implementation should include a pre-computed set of empty subtrees, based on the selected hash primitive. To avoid unnecessary computational expenditure, it is more efficient to pre-compute the roots of subtrees consisting solely of zeroed leaves. For instance, all the nodes of the following Merkle tree are constant values for H:

graph TB
    %% Root node
    r["R := H(t10, t11)"]
    
    %% Level 1
    m1["t10 := H(t00, t01)"] --> r
    m2["t11 := H(t02, t03)"] --> r
    
    %% Level 2
    c1["t00 := 0"] --> m1
    c2["t01 := 0"] --> m1
    c3["t02 := 0"] --> m2
    c4["t03 := 0"] --> m2

Let’s assume we have a long path on a sparse Merkle tree with a single leaf X with index 2:

graph TB
    %% Root
    r["R := H(t20, K2)"]
    
    %% Level 1
    t20["t20 := H(K1, t11)"] --> r
    t21["K2"] --> r
    
    %% Level 2
    m1["K1"] --> t20
    m2["t11 := H(X, K0)"] --> t20
    
    %% Level 3
    c3["X"] --> m2
    c4["K0"] --> m2

It would be a waste to compute (K0, K1, K2) here as they are, respectively, K0 := H(0), K1 := H(K0, K0), K2 := H(K1, K1). In other words, they are constant values that should always be available and should never have to hit the database backend to have their values fetched, nor should they exist as a data node. Whenever the tree queries for a node that doesn't exist on the data backend, it should return the constant precomputed empty subtree for that depth.

Normally, the trees will support precomputed values up to a certain depth. If we adopt a hash function with a 16-bit output, we should have 16 precomputed empty subtrees.
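These constants can be computed once, ahead of time; a sketch with a placeholder hash:

#![allow(unused)]
fn main() {
type Hash = [u8; 32];

fn hash(_data: &[u8]) -> Hash { [0u8; 32] } // placeholder for the selected primitive

// K0 := H(0), K(i) := H(K(i-1), K(i-1)); one constant per depth
fn empty_subtrees(depth: usize) -> Vec<Hash> {
    let mut empties = Vec::with_capacity(depth);
    let mut node = hash(&[0u8]);
    for _ in 0..depth {
        empties.push(node);
        node = hash(&[node.as_slice(), node.as_slice()].concat());
    }
    empties
}
}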

Future upgrades

We don't expect the MVP to be optimized. That is, we should have a working implementation, but not one yet optimized for specific use-cases.

  • Hash: In the context of sparse Merkle trees, the MVP could employ a widely-accepted cryptographic hash function as its fundamental building block. For example, Keccak256, which is native to the EVM, could be used due to its broad availability. However, utilizing this hash function may lead to an extensive gap between nodes, potentially resulting in a tree structure with only 2 leaves yet a significant depth, as the hashes of the two verifying keys might be exceptionally far apart. A future improvement would be to choose a cryptographic hash that keeps the leaf nodes close together. One cheap method to achieve this is to take the initial n bits (e.g., 16) of the hash output and use them as the index, given that any secure cryptographic hash maintains its collision resistance and avalanche effect characteristics at the target security level with the selected number of sampled bits; see the sketch after this list. Since we don't anticipate dealing with a large number of programs (i.e., a 256-bit number), 16 bits should be more than sufficient for this purpose.
  • Data backend: In typical scenarios, the number of nodes in a proof batch shouldn't be large: 8 bits should suffice to represent the number of programs; for very complex and large batches, 16 bits should suffice. Choosing a database backend for a Merkle tree can be challenging because it involves deciding on storage methodologies and optimizing database seek operations to retrieve nodes from the same path on a single page when possible. However, with a limited number of nodes, a streamlined database backend could suffice, delivering requested nodes without regard for the total page count. Given this performance constraint, we should prioritize compatibility over optimization: the ability to use the same backend across multiple blockchain clients and execution environments is more crucial than fine-tuning something that functions well only under specific conditions.
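The index derivation suggested in the Hash item above is essentially a one-liner: take the first n bits (here 16) of the hash output as the leaf index. The hashing of the verifying key is assumed to happen elsewhere.

#![allow(unused)]
fn main() {
// First 16 bits of the hash (e.g. Keccak256) of the verifying key,
// interpreted big-endian, used as the sparse Merkle tree index
fn leaf_index(vk_hash: [u8; 32]) -> u16 {
    u16::from_be_bytes([vk_hash[0], vk_hash[1]])
}
}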

Authorizations & Processors

The Authorizations and Processor contracts are foundational pieces of the Valence Protocol, as they enable on-chain (and cross-chain) execution of Valence Programs and enforce access control to the program's Subroutines via Authorizations.

This section explains the rationale for these contracts and shares insights into their technical implementation, as well as how end-users can interact with Valence Programs via Authorizations.

Rationale

  • To have a general purpose set of smart contracts that provide users with a single point of entry to interact with the Valence Program, which can have libraries and accounts deployed on multiple chains.
  • To have all the user authorizations for multiple domains in a single place, making it very easy to control the application.
  • To have a single address (Processor) that will execute the messages for all the contracts in a domain using execution queues.
  • To only tick a single contract (Processor) that will go through the queues to route and execute the messages.
  • To create, edit, or remove different application permissions with ease.

Technical deep-dive:

Assumptions

  • Funds: You cannot send funds with the messages.

  • Bridging: We are assuming that messages can be sent and confirmed bidirectionally between domains: the authorization contract on the main domain sends messages to the processor on a different domain in one direction, and callbacks confirming correct or failed execution flow in the other direction.

  • Instantiation: All these contracts can be instantiated beforehand, off-chain, with predictable addresses. Here is an example instantiation flow using Polytone:

    • Predict authorization contract address.
    • Instantiate polytone contracts & set up relayers.
    • Predict proxy contract address for the authorization contract on each external domain.
    • Predict proxy contract address on the main domain for each processor on external domains.
    • Instantiate all processors. The sender on external domains will be the predicted proxy, and on the main domain it will be the authorization contract itself.
    • Instantiate authorization contract with all the processors and their predicted proxies for external domains and the processor on the main domain.
  • Relaying: Relayers will be running once everything is instantiated.

  • Tokenfactory: The main domain has the tokenfactory module with no token creation fee, so that we can create and mint these non-fungible tokens at no additional cost.

  • Domains: In the current version, actions in each authorization will be limited to a single domain.

Processor

The Processor will be a contract on each domain of our workflow. It handles the execution queues, which contain Message Batches. The Processor can be ticked permissionlessly, which will execute the next Message Batch in the queue if it is executable, or rotate it to the back of the queue if it isn't executable yet. The Processor will also handle the Retry logic for each batch (if the batch is atomic) or function (if the batch is non-atomic). After a Message Batch has been executed successfully or has reached the maximum number of retries, it will be removed from the execution queue, and the Processor will send a callback with the execution information to the Authorization contract.

The processors will be instantiated in advance with the correct address that can send messages to them, according to the InstantiationFlow described in the Assumptions section.

The Authorization contract will be the only address allowed to add lists of functions to the execution queues. It will also be allowed to Pause/Resume the Processor, to arbitrarily remove functions from the queues, or to add certain messages at a specific position.

There will be two execution queues: one High and one Med. This will allow giving different priorities to Message Batches.

Execution

When a processor is Ticked, we will take the first MessageBatch from the queue (High if there are batches there, or Med if there aren't). After taking it, we will execute it in different ways depending on whether the batch is Atomic or NonAtomic.

  • For Atomic batches, the Processor will execute them by sending them to itself and trying to execute them in a Fire and Forget manner. If this execution fails, we will check the RetryLogic of the batch to decide if it is to be re-queued or not (if not, we will send a callback with Rejected status to the Authorization contract). If it succeeds, we will send a callback with Executed status to the Authorization contract.
  • For NonAtomic batches, we will execute the functions one by one, applying the RetryLogic individually to each function if it fails. NonAtomic functions might also be confirmed via CallbackConfirmations, in which case we will keep them in a separate Map until we receive that specific callback. Each time a function is confirmed, we will re-queue the batch and keep track of which function has to execute next. If at some point a function uses up all its retries, we will send a callback to the Authorization contract with a PartiallyExecuted(num_of_functions_executed) status. If all of them succeed, the status will be Executed, and if none of them were executed, it will be Rejected. For NonAtomic batches, we need to tick the processor each time the batch is at the top of the queue to continue, so we will need at least as many ticks as there are functions in the batch, and each function has to wait for its turn.

Storage

The Processor will receive batches of messages from the authorization contract and will enqueue them in a custom storage structure we designed for this purpose, called a QueueMap. This structure is a FIFO queue with owner privileges (it allows the owner to insert or remove items from any position in the queue). Each item stored in the queue is a MessageBatch object that looks like this:

#![allow(unused)]
fn main() {
pub struct MessageBatch {
    pub id: u64,
    pub msgs: Vec<ProcessorMessage>,
    pub subroutine: Subroutine,
    pub priority: Priority,
    pub retry: Option<CurrentRetry>,
}
}
  • id: represents the global id of the batch. The Authorization contract, to understand the callbacks that it will receive from each processor, identifies each batch with an id. This id is unique for the entire application.
  • msgs: the messages the processor needs to execute for this batch (e.g. a CosmWasm ExecuteMsg or MigrateMsg).
  • subroutine: This is the config that the authorization table defines for the execution of these functions. With this field we can know if the functions need to be executed atomically or not atomically, for example, and the retry logic for each batch/function depending on the config type.
  • priority (for internal use): batches will be queued in different priority queues when they are received from the authorization contract. We also keep this priority here because they might need to be re-queued after a failed execution and we need to know where to re-queue them.
  • retry (for internal use): we are keeping the current retry we are at (if the execution previously failed) to know when to abort if we exceed the max retry amounts.
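As an illustrative model (not the actual contract storage code), the QueueMap semantics amount to a FIFO queue with owner-only positional access:

#![allow(unused)]
fn main() {
use std::collections::VecDeque;

struct QueueMap<T> {
    items: VecDeque<T>,
}

impl<T> QueueMap<T> {
    // Normal FIFO operation used by the processor
    fn push_back(&mut self, item: T) { self.items.push_back(item); }
    fn pop_front(&mut self) -> Option<T> { self.items.pop_front() }

    // Owner-privileged operations (reserved for the authorization contract)
    fn insert_at(&mut self, index: usize, item: T) { self.items.insert(index, item); }
    fn remove_at(&mut self, index: usize) -> Option<T> { self.items.remove(index) }
}
}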

Authorization

The authorization contract will be a single contract deployed on the main domain. It will define the authorizations of the top-level application, which can include libraries on different domains (chains). For each domain, there will be one Processor (with its corresponding execution queues). The Authorization contract will connect to all of the Processors using a connector (e.g. Polytone, Hyperlane…) and will route the Message Batches to be executed to the right domain. At the same time, for each external domain, we will have a proxy contract on the main domain which will receive the callbacks sent from the processor on the external domain with the ExecutionResult of the Message Batch.

The contract will be instantiated once at the very beginning and will be used during the entire top-level application lifetime. Users will never interact with the individual Smart Contracts of each workflow, but with the Authorization contract directly.

Instantiation

When the contract is instantiated, it will be provided the following information:

  • Processor contract on main domain.

  • [(Domain, Connector(Polytone_note_contract), Processor_contract_on_domain, callback_proxy, IBC_Timeout_settings)]: If it's a cross-domain application, an array will be passed with each external domain label and its corresponding connector contracts and proxies that will be instantiated beforehand. For each connector, there will also be a proxy corresponding to that external domain, because it's a two-way communication flow and we need to receive callbacks. Additionally, we need a set of Timeout settings for the bridge, to know for how long the messages sent through the connector are going to be valid.

  • Admin of the contract (if different to sender).

The instantiation will set up all the processors on each domain so that we can start instantiating the libraries afterwards and providing the correct Processor addresses to each of them depending on which domain they are in.

Owner Functions

  • create_authorizations(vec[Authorization]): provides an authorization list, which is the core information of the authorization contract. It will include all the possible sets of functions that can be executed, containing the following information:

    • Label: unique name of the authorization. This label will be used to identify the authorization and will be used as subdenom of the tokenfactory token in case it is permissioned. Due to tokenfactory module restrictions, the max length of this field is 44 characters. Example: If the label is withdraw and only address neutron123 is allowed to execute this authorization, we will create the token factory/<contract_addr>/withdraw and mint one to that address. If withdraw was permissionless, there is no need for any token, so it's not created.

    • Mode: can either be Permissioned or Permissionless. If Permissionless is chosen, any address can execute this function list. In the case of Permissioned, we will also specify which permission type we want (with CallLimit or without); a list of addresses will be provided in both cases. If there is a CallLimit, we will mint a certain amount of tokens for each address that is passed; if there isn't, we will mint only one token, and that token will be used every time.

    • NotBefore: from what time the authorization can be executed. We can specify a block height or a timestamp.

    • Expiration: until when (what block or timestamp) this authorization is valid.

    • MaxConcurrentExecutions (default 1): to prevent DDoS attacks and avoid clogging the execution queues, a given authorization's subroutines are allowed to be present in the execution queue a maximum number of times (default 1 unless overwritten).

    • Subroutine: set of functions in a specific order to be executed. Subroutines can be of two types: Atomic or NonAtomic. For the Atomic subroutines, we will provide an array of Atomic functions and an optional RetryLogic for the entire subroutine. For the NonAtomic subroutines we will just provide an array of NonAtomic functions.

      • AtomicFunction: each Atomic function has the following parameters:

        • Domain of execution (must be the same for all functions in v1).

        • MessageDetails: the type (e.g. CosmWasmExecuteMsg) and the message (the name of the message in the ExecuteMsg json), together with, if applicable, three lists of parameters: one for MustBeIncluded, one for CannotBeIncluded and one for MustBeValue. This gives more control over the authorizations. Example: we may want one authorization to provide the message with parameters (an admin function for that service) but another authorization for the message without any parameters (a user function for that service).

        • Contract address that will execute it.

      • NonAtomicFunction: each NonAtomic function has the following parameters:

        • Domain of execution

        • MessageDetails (like above).

        • Contract address that will execute it.

        • RetryLogic (optional, self-explanatory).

        • CallbackConfirmation (optional): This defines if a NonAtomicFunction is completed after receiving a callback (Binary) from a specific address instead of after a correct execution. This is used in case of the correct message execution not being enough to consider the message completed, so it will define what callback we should receive from a specific address to flag that message as completed. For this, the processor will append an execution_id to the message which will be also passed in the callback by the service to identify what function this callback is for.

    • Priority (default Med): the priority of a set of functions can be set to High. If this is the case, they will go into a preferential execution queue. Messages in the High priority queue will be taken over messages in the Med priority queue. All authorizations will have an initial state of Enabled.

    Here is an example of an Authorization table after its creation:

    Authorization Table

  • add_external_domains([external_domains]): if we want to add external domains after instantiation.

  • modify_authorization(label, updated_values): can modify certain updatable fields of the authorization: start_time, expiration, max_concurrent_executions and priority.

  • disable_authorization(label): puts an Authorization into the Disabled state. These authorizations cannot be run anymore.

  • enable_authorization(label): puts an Authorization into the Enabled state so that it can be run again.

  • mint_authorization(label, vec[(addresses, Optional: amounts)]): if the authorization is Permissioned with CallLimit: true, this function will mint the corresponding token amounts of that authorization to the addresses provided. If CallLimit: false, it will mint 1 token to the new addresses provided.

  • pause_processor(domain): pause the processor of the domain.

  • resume_processor(domain): resume the processor of the domain.

  • insert_messages(label, queue_position, queue_type, vec[ProcessorMessage]): adds this set of messages to the queue at a specific position.

  • evict_messages(label, queue_position, queue_type): remove the set of messages from the specific position in a queue.

  • add_sub_owners(vec[addresses]): add the given addresses as 2nd-tier owners. These sub_owners can do everything except adding/removing admins.

  • remove_sub_owners(vec[addresses]): remove these addresses from the sub_owner list.

User Actions

  • send_msgs(label, vec[ProcessorMessage]): users can execute an authorization with a specific label. If the authorization is Permissioned, the authorization contract will check whether they are allowed to execute it: for Permissioned (without limit), by checking that the user holds the token in their wallet; for Permissioned (with limit), by checking that the user sent the token along with the messages. Along with the authorization label, the user will provide an array of encoded messages, together with the message type (e.g. CosmwasmExecuteMsg) and any other parameters for that specific ProcessorMessage (e.g. for a CosmwasmMigrateMsg, we also need to pass a code_id). The contract will then check that the messages match the ones defined in the authorization (and in the correct order) and that all parameter restrictions, if applied, are correct.

    If all checks pass, the contract will route the messages to the correct Processor with an execution_id for the processor to call back with. This execution_id is unique for the entire application. If the execution of all the actions is confirmed via a callback, we will burn the token; if they fail, we will send the token back. Here is an example flowchart of how a user interacts with the authorization contract to execute messages in a service sitting on a domain:

User flowchart
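As a sketch, the user-facing entry point and message envelope described above could look like this; the variant and field names are illustrative, not the exact contract API.

#![allow(unused)]
fn main() {
pub enum AuthorizationExecuteMsg {
    SendMsgs {
        // Label of the authorization to execute
        label: String,
        // Encoded messages matching the authorization's subroutine, in order
        messages: Vec<ProcessorMessage>,
    },
}

pub enum ProcessorMessage {
    CosmwasmExecuteMsg { msg: Vec<u8> },
    // Migrations additionally carry the code_id to migrate to
    CosmwasmMigrateMsg { code_id: u64, msg: Vec<u8> },
}
}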

Callbacks

There are different types of callbacks in our application. Each of them has a specific function and is used in a different part of the application.

Function Callbacks

For the execution of NonAtomic batches, each function in the batch can optionally be confirmed with a callback from a specific address. When the processor reaches a function that requires a callback, it will inject the execution_id of the batch into the message that is going to be executed on the library. This means the library needs to be ready to receive that execution_id and to know what the expected callback is and where it has to come from in order to confirm that function; otherwise, that function will stay unconfirmed and the batch will not move to the next function. The callback will be sent to the processor with the execution_id so that the processor knows which function is being confirmed. The processor will then validate that the correct callback was received from the correct address.

If the processor receives the expected callback from the correct address, the batch will move to the next function. If it receives a different callback than expected from that address, the execution of that function will be considered failed and it will be retried (if applicable). In any case, a callback must be received to determine if the function was successful or not.

Processor Callbacks

Once a Processor batch is executed, or it fails with no more retries available, the Processor will send a callback to the Authorization contract with the execution_id of the batch and the result of the execution. All this information will be stored in the Authorization contract state so that the history of all executions can be queried from it. This is what a ProcessorCallbackInfo looks like:

#![allow(unused)]
fn main() {
pub struct ProcessorCallbackInfo {
    // Execution ID that the callback was for
    pub execution_id: u64,
    // Who started this operation, used for tokenfactory actions
    pub initiator: OperationInitiator,
    // Address that can send a bridge timeout or success for the message (if applied)
    pub bridge_callback_address: Option<Addr>,
    // Address that will send the callback for the processor
    pub processor_callback_address: Addr,
    // Domain that the callback came from
    pub domain: Domain,
    // Label of the authorization
    pub label: String,
    // Messages that were sent to the processor
    pub messages: Vec<ProcessorMessage>,
    // Optional ttl for re-sending in case of bridged timeouts
    pub ttl: Option<Expiration>,
    // Result of the execution
    pub execution_result: ExecutionResult,
}

pub enum ExecutionResult {
    InProcess,
    // Everything executed successfully
    Success,
    // Execution was rejected, and the reason
    Rejected(String),
    // Partially executed, for non-atomic function batches
    // Indicates how many functions were executed and the reason the next function was not executed
    PartiallyExecuted(usize, String),
    // Removed by Owner - happens when, from the authorization contract, a remove item from queue is sent
    RemovedByOwner,
    // Timeout - happens when the bridged message times out
    // We'll use a flag to indicate if the timeout is retriable or not
    // true - retriable
    // false - not retriable
    Timeout(bool),
    // Unexpected error that should never happen but we'll store it here if it ever does
    UnexpectedError(String),
}
}

The key information from here is the label, to identify the authorization that was executed; the messages, to identify what the user sent; and the execution_result, to know if the execution was successful, partially successful or rejected.

Bridge Callbacks

When messages need to be sent through bridges because we are executing batches on external domains, we need to know if, for example, a timeout happened, and keep track of it. For this reason, we have callbacks for each bridge that we support, with specific logic that will be executed when they are received. For Polytone timeouts, we will check if the ttl field has not expired and allow permissionless retries if it's still valid. In case the ttl has expired, we will set the ExecutionResult to Timeout (not retriable) and send the authorization token back to the user, if the user sent it to execute the authorization.

Libraries

This section contains a detailed description of the various libraries that can be used to rapidly build Valence cross-chain programs.

Valence Protocol libraries:

Astroport LPer library

The Valence Astroport LPer library allows providing liquidity into an Astroport Liquidity Pool from an input account and depositing the LP tokens into an output account.

High-level flow

---
title: Astroport Liquidity Provider
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Astroport
      Liquidity
      Provider]
  AP[Astroport
     Pool]
  P -- 1/Provide Liquidity --> S
  S -- 2/Query balances --> IA
  S -- 3/Compute amounts --> S
  S -- 4/Do Provide Liquidity --> IA
  IA -- 5/Provide Liquidity
				  [Tokens] --> AP
  AP -- 5'/Transfer LP Tokens --> OA

Functions

| Function                    | Parameters | Description |
|-----------------------------|------------|-------------|
| ProvideDoubleSidedLiquidity | expected_pool_ratio_range: Option<DecimalRange> | Provide double-sided liquidity to the pre-configured Astroport Pool from the input account, and deposit the LP tokens into the output account. Abort if the pool ratio is not within the expected_pool_ratio_range (if specified). |
| ProvideSingleSidedLiquidity | asset: String<br>limit: Option<Uint128><br>expected_pool_ratio_range: Option<DecimalRange> | Provide single-sided liquidity for the specified asset to the pre-configured Astroport Pool from the input account, and deposit the LP tokens into the output account. Abort if the pool ratio is not within the expected_pool_ratio_range (if specified). |

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account from which the funds are LPed
    pub input_addr: LibraryAccountType,
    // Account to which the LP tokens are forwarded
    pub output_addr: LibraryAccountType,
    // Pool address
    pub pool_addr: String,
    // LP configuration
    pub lp_config: LiquidityProviderConfig,
}

pub struct LiquidityProviderConfig {
    // Pool type, old Astroport pools use Cw20 lp tokens and new pools use native tokens, so we specify here what kind of token we are going to get.
    // We also provide the PairType structure of the right Astroport version that we are going to use for each scenario
    pub pool_type: PoolType,
    // Denoms of both native assets we are going to provide liquidity for
    pub asset_data: AssetData,
    // Slippage tolerance
    pub slippage_tolerance: Option<Decimal>,
}

#[cw_serde]
pub enum PoolType {
    NativeLpToken(valence_astroport_utils::astroport_native_lp_token::PairType),
    Cw20LpToken(valence_astroport_utils::astroport_cw20_lp_token::PairType),
}


pub struct AssetData {
    pub asset1: String,
    pub asset2: String,
}
}
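For illustration, a config could be assembled as follows; the addresses are placeholders and the LibraryAccountType constructor shown is an assumption, not the confirmed API.

#![allow(unused)]
fn main() {
use cosmwasm_std::Decimal;

let config = LibraryConfig {
    input_addr: LibraryAccountType::Addr("neutron1...".to_string()),  // assumed constructor
    output_addr: LibraryAccountType::Addr("neutron1...".to_string()), // assumed constructor
    pool_addr: "neutron1...".to_string(), // Astroport pool address (placeholder)
    lp_config: LiquidityProviderConfig {
        pool_type: PoolType::NativeLpToken(
            valence_astroport_utils::astroport_native_lp_token::PairType::Xyk {},
        ),
        asset_data: AssetData {
            asset1: "untrn".to_string(),
            asset2: "ibc/...".to_string(), // counterparty denom (placeholder)
        },
        slippage_tolerance: Some(Decimal::percent(1)),
    },
};
}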

Astroport Withdrawer library

The Valence Astroport Withdrawer library allows withdrawing liquidity from an Astroport Liquidity Pool from an input account and depositing the withdrawn tokens into an output account.

High-level flow

---
title: Astroport Liquidity Withdrawal
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Astroport
      Liquidity
      Withdrawal]
  AP[Astroport
     Pool]
  P -- 1/Withdraw Liquidity --> S
  S -- 2/Query balances --> IA
  S -- 3/Compute amounts --> S
  S -- 4/Do Withdraw Liquidity --> IA
  IA -- 5/Withdraw Liquidity
				  [LP Tokens] --> AP
  AP -- 5'/Transfer assets --> OA

Functions

| Function | Parameters | Description |
| --- | --- | --- |
| WithdrawLiquidity | - | Withdraw liquidity from the configured Astroport Pool from the input account and deposit the withdrawn tokens into the configured output account. |

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account holding the LP position
    pub input_addr: LibraryAccountType,
    // Account to which the withdrawn assets are forwarded
    pub output_addr: LibraryAccountType,
    // Pool address
    pub pool_addr: String,
    // Liquidity withdrawer configuration
    pub withdrawer_config: LiquidityWithdrawerConfig,
}

pub struct LiquidityWithdrawerConfig {
    // Pool type: old Astroport pools use CW20 LP tokens and new pools use native tokens, so we specify here which kind of LP token we will use.
    // We also provide the PairType structure of the right Astroport version to use for each scenario
    pub pool_type: PoolType,
}

pub enum PoolType {
    NativeLpToken,
    Cw20LpToken,
}
}

Valence Forwarder library

The Valence Forwarder library allows continuously forwarding funds from an input account to an output account, following some time constraints. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Forwarder library.

High-level flow

---
title: Forwarder Library
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Forwarder
    Library]
  P -- 1/Forward --> S
  S -- 2/Query balances --> IA
  S -- 3/Do Send funds --> IA
  IA -- 4/Send funds --> OA

Functions

| Function | Parameters | Description |
| --- | --- | --- |
| Forward | - | Forward funds from the configured input account to the output account, according to the forwarding configs & constraints. |

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account from which the funds are pulled
    pub input_addr: LibraryAccountType,
    // Account to which the funds are sent
    pub output_addr: LibraryAccountType,
    // Forwarding configuration per denom
    pub forwarding_configs: Vec<UncheckedForwardingConfig>,
    // Constraints on forwarding operations
    pub forwarding_constraints: ForwardingConstraints,
}

pub struct UncheckedForwardingConfig {
    // Denom to be forwarded (either native or CW20)
    pub denom: UncheckedDenom,
    // Max amount of tokens to be transferred per Forward operation
    pub max_amount: Uint128,
}

// Time constraints on forwarding operations
pub struct ForwardingConstraints {
    // Minimum interval between 2 successive forward operations,
    // specified either as a number of blocks, or as a time delta.
    min_interval: Option<Duration>,
}
}
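
For example, a configuration that forwards at most 1,000,000 untrn per Forward call, no more than once every 100 blocks, might look like this. This is a sketch: the addresses are placeholders, and the LibraryAccountType::Addr and ForwardingConstraints::new constructors are assumptions.

#![allow(unused)]
fn main() {
let config = LibraryConfig {
    input_addr: LibraryAccountType::Addr("neutron1...".to_string()),
    output_addr: LibraryAccountType::Addr("neutron1...".to_string()),
    forwarding_configs: vec![UncheckedForwardingConfig {
        denom: UncheckedDenom::Native("untrn".to_string()),
        max_amount: Uint128::new(1_000_000),
    }],
    // at most one Forward operation every 100 blocks
    forwarding_constraints: ForwardingConstraints::new(Some(Duration::Height(100))),
};
}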

Valence Generic IBC Transfer library

The Valence Generic IBC Transfer library allows transferring funds over IBC from an input account on a source chain to an output account on a destination chain. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Generic IBC Transfer library.

Note: this library should not be used on Neutron, which requires fees to be paid to relayers for IBC transfers. On Neutron, prefer the dedicated (and optimized) Neutron IBC Transfer library instead.

High-level flow

---
title: Generic IBC Transfer Library
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Gen IBC Transfer
    Library]
  subgraph Chain 1
  P -- 1/IbcTransfer --> S
  S -- 2/Query balances --> IA
  S -- 3/Do Send funds --> IA
  end
  subgraph Chain 2
  IA -- 4/IBC Transfer --> OA
  end

Functions

| Function | Parameters | Description |
| --- | --- | --- |
| IbcTransfer | - | Transfer funds over IBC from an input account on a source chain to an output account on a destination chain. |

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
struct LibraryConfig {
  // Account from which the funds are pulled (on the source chain)
  input_addr: LibraryAccountType,
  // Account to which the funds are sent (on the destination chain)
  output_addr: String,
  // Denom of the token to transfer
  denom: UncheckedDenom,
  // Amount to be transferred, either a fixed amount or the whole available balance.
  amount: IbcTransferAmount,
  // Memo to be passed in the IBC transfer message.
  memo: String,
  // Information about the destination chain.
  remote_chain_info: RemoteChainInfo,
  // Denom map for the Packet-Forwarding Middleware, to perform a multi-hop transfer.
  denom_to_pfm_map: BTreeMap<String, PacketForwardMiddlewareConfig>,
}

// Defines the amount to be transferred, either a fixed amount or the whole available balance.
enum IbcTransferAmount {
  // Transfer the full available balance of the input account.
  FullAmount,
  // Transfer the specified amount of tokens.
  FixedAmount(Uint128),
}

pub struct RemoteChainInfo {
  // Channel of the IBC connection to be used.
  channel_id: String,
  // Port of the IBC connection to be used.
  port_id: Option<String>,
  // Timeout for the IBC transfer.
  ibc_transfer_timeout: Option<Uint64>,
}

// Configuration for a multi-hop transfer using the Packet Forwarding Middleware
struct PacketForwardMiddlewareConfig {
  // Channel ID from the source chain to the intermediate chain
  local_to_hop_chain_channel_id: String,
  // Channel ID from the intermediate to the destination chain
  hop_to_destination_chain_channel_id: String,
  // Temporary receiver address on the intermediate chain
  hop_chain_receiver_address: String,
}
}
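
As an illustration, a configuration that sweeps the input account's full balance over a single hop could look like this. This is a sketch: channel IDs, addresses, and the defaulting behavior noted in the comments are assumptions.

#![allow(unused)]
fn main() {
let config = LibraryConfig {
    input_addr: LibraryAccountType::Addr("cosmos1...".to_string()),
    output_addr: "osmo1...".to_string(),
    denom: UncheckedDenom::Native("uatom".to_string()),
    // transfer whatever balance the input account holds at execution time
    amount: IbcTransferAmount::FullAmount,
    memo: String::new(),
    remote_chain_info: RemoteChainInfo {
        channel_id: "channel-141".to_string(), // placeholder channel ID
        port_id: None,                         // assumed to default to "transfer"
        ibc_transfer_timeout: None,            // assumed to use a library default
    },
    // empty map: no Packet-Forwarding Middleware multi-hop routing
    denom_to_pfm_map: BTreeMap::new(),
};
}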

Valence Neutron IBC Transfer library

The Valence Neutron IBC Transfer library allows transferring funds over IBC from an input account on Neutron to an output account on a destination chain. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Neutron IBC Transfer library.

Note: this library should only be used on Neutron, which requires fees to be paid to relayers for IBC transfers. For other CosmWasm chains, prefer the Generic IBC Transfer library instead.

High-level flow

---
title: Neutron IBC Transfer Library
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Neutron IBC Transfer
    Library]
  subgraph Neutron
  P -- 1/IbcTransfer --> S
  S -- 2/Query balances --> IA
  S -- 3/Do Send funds --> IA
  end
  subgraph Chain 2
  IA -- 4/IBC Transfer --> OA
  end

Functions

| Function | Parameters | Description |
| --- | --- | --- |
| IbcTransfer | - | Transfer funds over IBC from an input account on Neutron to an output account on a destination chain. |

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
struct LibraryConfig {
  // Account from which the funds are pulled (on the source chain)
  input_addr: LibraryAccountType,
  // Account to which the funds are sent (on the destination chain)
  output_addr: String,
  // Denom of the token to transfer
  denom: UncheckedDenom,
  // Amount to be transferred, either a fixed amount or the whole available balance.
  amount: IbcTransferAmount,
  // Memo to be passed in the IBC transfer message.
  memo: String,
  // Information about the destination chain.
  remote_chain_info: RemoteChainInfo,
  // Denom map for the Packet-Forwarding Middleware, to perform a multi-hop transfer.
  denom_to_pfm_map: BTreeMap<String, PacketForwardMiddlewareConfig>,
}

// Defines the amount to be transferred, either a fixed amount or the whole available balance.
enum IbcTransferAmount {
  // Transfer the full available balance of the input account.
  FullAmount,
  // Transfer the specified amount of tokens.
  FixedAmount(Uint128),
}

pub struct RemoteChainInfo {
  // Channel of the IBC connection to be used.
  channel_id: String,
  // Port of the IBC connection to be used.
  port_id: Option<String>,
  // Timeout for the IBC transfer.
  ibc_transfer_timeout: Option<Uint64>,
}

// Configuration for a multi-hop transfer using the Packet Forwarding Middleware
struct PacketForwardMiddlewareConfig {
  // Channel ID from the source chain to the intermediate chain
  local_to_hop_chain_channel_id: String,
  // Channel ID from the intermediate to the destination chain
  hop_to_destination_chain_channel_id: String,
  // Temporary receiver address on the intermediate chain
  hop_chain_receiver_address: String,
}
}

Osmosis CL LPer library

The Valence Osmosis CL LPer library allows creating concentrated liquidity positions on Osmosis from an input account and depositing the LP tokens into an output account.

High-level flow

---
title: Osmosis CL Liquidity Provider
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Osmosis CL
      Liquidity
      Provider]
  AP[Osmosis CL
     Pool]
  P -- 1/Provide Liquidity --> S
  S -- 2/Query balances --> IA
  S -- 3/Configure target
    range --> S
  S -- 4/Do Provide Liquidity --> IA
  IA -- 5/Provide Liquidity
				  [Tokens] --> AP
  AP -- 5'/Transfer LP Tokens --> OA

Concentrated Liquidity Position creation

Because of the way CL positions are created, there are two ways to achieve it:

Default

Default position creation centers on the idea of creating a position with respect to the currently active tick of the pool.

This method expects a single parameter, bucket_amount, which describes how many buckets of the pool should be taken into account on both sides of the price curve.

Consider a situation where the current tick is 125 and the configured tick spacing is 10.

If this method is called with bucket_amount set to 5, the following logic is performed (see the sketch below):

  • find the current bucket range, which is 120 to 130
  • extend that range by 5 buckets on each side: both the range "to the left" and the range "to the right" are extended by 5 * 10 = 50, resulting in a covered range from 120 - 50 = 70 to 130 + 50 = 180 and a position tick range of (70, 180)
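
A minimal sketch of that computation, assuming the bucket math mirrors the example above (the library's actual implementation may differ):

#![allow(unused)]
fn main() {
fn default_tick_range(current_tick: i64, tick_spacing: i64, bucket_amount: i64) -> (i64, i64) {
    // find the bounds of the bucket containing the current tick
    let bucket_lower = current_tick - current_tick.rem_euclid(tick_spacing); // 120
    let bucket_upper = bucket_lower + tick_spacing;                          // 130
    // extend the bucket range by `bucket_amount` buckets on each side
    (
        bucket_lower - bucket_amount * tick_spacing, // 120 - 5 * 10 = 70
        bucket_upper + bucket_amount * tick_spacing, // 130 + 5 * 10 = 180
    )
}

assert_eq!(default_tick_range(125, 10, 5), (70, 180));
}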

Custom

Custom position creation allows for more fine-grained control over the way the position is created.

This approach expects users to specify the following parameters:

  • tick_range, which describes the price range to be covered
  • token_min_amount_0 and token_min_amount_1 which are optional parameters that describe the minimum amount of tokens that should be provided to the pool.

With this flexibility a wide variety of positions can be created, such as those that are entirely single-sided.

Functions

| Function | Parameters | Description |
| --- | --- | --- |
| ProvideLiquidityDefault | bucket_amount: Uint64 | Create a position on the pre-configured Osmosis Pool from the input account, following the Default approach described above, and deposit the LP tokens into the output account. |
| ProvideLiquidityCustom | tick_range: TickRange<br>token_min_amount_0: Option<Uint128><br>token_min_amount_1: Option<Uint128> | Create a position on the pre-configured Osmosis Pool from the input account, following the Custom approach described above, and deposit the LP tokens into the output account. |

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account from which the funds are LPed
    pub input_addr: LibraryAccountType,
    // Account to which the LP position is forwarded
    pub output_addr: LibraryAccountType,
    // LP configuration
    pub lp_config: LiquidityProviderConfig,
}

pub struct LiquidityProviderConfig {
    // ID of the Osmosis CL pool
    pub pool_id: Uint64,
    // Pool asset 1 
    pub pool_asset_1: String,
    // Pool asset 2
    pub pool_asset_2: String,
    // Pool global price range
    pub global_tick_range: TickRange,
}
}

Osmosis CL liquidity withdrawer library

The Valence Osmosis CL Withdrawer library allows withdrawing a concentrated liquidity position from an Osmosis pool via an input account and transferring the resulting tokens to an output account.

High-level flow

---
title: Osmosis CL Liquidity Withdrawal
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Osmosis CL
      Liquidity
      Withdrawal]
  AP[Osmosis CL
     Pool]
  P -- 1/Withdraw Liquidity --> S
  S -- 2/Query balances --> IA
  S -- 3/Compute amounts --> S
  S -- 4/Do Withdraw Liquidity --> IA
  IA -- 5/Withdraw Liquidity
				  [LP Position] --> AP
  AP -- 5'/Transfer assets --> OA

Functions

| Function | Parameters | Description |
| --- | --- | --- |
| WithdrawLiquidity | position_id: Uint64<br>liquidity_amount: String | Withdraw liquidity from the configured Osmosis Pool from the input account, according to the given parameters, and transfer the withdrawn tokens to the configured output account. |

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account holding the LP position
    pub input_addr: LibraryAccountType,
    // Account to which the withdrawn assets are forwarded
    pub output_addr: LibraryAccountType,
    // ID of the pool
    pub pool_id: Uint64,
}
}

Osmosis GAMM LPer library

The Valence Osmosis GAMM LPer library allows joining a pool on Osmosis, using the GAMM module (Generalized Automated Market Maker), from an input account, and depositing the LP tokens into an output account.

High-level flow

---
title: Osmosis GAMM Liquidity Provider
---
graph LR
  IA((Input
      Account))
  OA((Output
          Account))
  P[Processor]
  S[Osmosis GAMM
      Liquidity
      Provider]
  AP[Osmosis
     Pool]
  P -- 1/Join Pool --> S
  S -- 2/Query balances --> IA
  S -- 3/Compute amounts --> S
  S -- 4/Do Join Pool --> IA
  IA -- 5/Join Pool
                  [Tokens] --> AP
  AP -- 5'/Transfer LP tokens --> OA

Functions

| Function | Parameters | Description |
| --- | --- | --- |
| ProvideDoubleSidedLiquidity | expected_spot_price: Option<DecimalRange> | Provide double-sided liquidity to the pre-configured Osmosis Pool from the input account, and deposit the LP tokens into the output account. Aborts if the spot price is not within the expected_spot_price range (if specified). |
| ProvideSingleSidedLiquidity | asset: String<br>limit: Option<Uint128><br>expected_spot_price: Option<DecimalRange> | Provide single-sided liquidity for the specified asset to the pre-configured Osmosis Pool from the input account, and deposit the LP tokens into the output account. Aborts if the spot price is not within the expected_spot_price range (if specified). |

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account from which the funds are LPed
    pub input_addr: LibraryAccountType,
    // Account to which the LP position is forwarded
    pub output_addr: LibraryAccountType,
    // LP configuration
    pub lp_config: LiquidityProviderConfig,
}

pub struct LiquidityProviderConfig {
    // ID of the Osmosis pool
    pub pool_id: Uint64,
    // Pool asset 1 
    pub pool_asset_1: String,
    // Pool asset 2
    pub pool_asset_2: String,
}
}

Osmosis GAMM liquidity withdrawer library

The Valence Osmosis GAMM Withdrawer library allows exiting a pool on Osmosis, using the GAMM module (Generalized Automated Market Maker), from an input account, and depositing the withdrawn tokens into an output account.

High-level flow

---
title: Osmosis GAMM Liquidity Withdrawal
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Osmosis GAMM
      Liquidity
      Withdrawal]
  AP[Osmosis
     Pool]
  P -- 1/Withdraw Liquidity --> S
  S -- 2/Query balances --> IA
  S -- 3/Compute amounts --> S
  S -- 4/Do Withdraw Liquidity --> IA
  IA -- 5/Withdraw Liquidity
				  [LP Tokens] --> AP
  AP -- 5'/Transfer assets --> OA

Functions

| Function | Parameters | Description |
| --- | --- | --- |
| WithdrawLiquidity | - | Withdraw liquidity from the configured Osmosis Pool from the input account and deposit the withdrawn tokens into the configured output account. |

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account holding the LP tokens
    pub input_addr: LibraryAccountType,
    // Account to which the withdrawn assets are forwarded
    pub output_addr: LibraryAccountType,
    // Liquidity withdrawer configuration
    pub withdrawer_config: LiquidityWithdrawerConfig,
}

pub struct LiquidityWithdrawerConfig {
    // ID of the pool
    pub pool_id: Uint64,
}
}

Valence Reverse Splitter library

The Reverse Splitter library allows routing funds from one or more input account(s) to a single output account, for one or more token denom(s), according to the configured ratio(s). It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Reverse Splitter library.

High-level flow

---
title: Reverse Splitter Library
---
graph LR
  IA1((Input
      Account1))
  IA2((Input
       Account2))
  OA((Output
		  Account))
  P[Processor]
  S[Reverse Splitter
    Library]
  C[Contract]
  P -- 1/Split --> S
  S -- 2/Query balances --> IA1
  S -- 2'/Query balances --> IA2
  S -. 3/Query split ratio .-> C
  S -- 4/Do Send funds --> IA1
  S -- 4'/Do Send funds --> IA2
  IA1 -- 5/Send funds --> OA
  IA2 -- 5'/Send funds --> OA

Functions

| Function | Parameters | Description |
| --- | --- | --- |
| Split | - | Split and route funds from the configured input account(s) to the output account, according to the configured token denom(s) and ratio(s). |

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
struct LibraryConfig {
    output_addr: LibraryAccountType,   // Account to which the funds are sent.
    splits: Vec<UncheckedSplitConfig>, // Split configuration per denom.
    base_denom: UncheckedDenom         // Base denom, used with ratios.
}

// Split config for specified account
struct UncheckedSplitConfig {
  denom: UncheckedDenom,                // Denom for this split configuration (either native or CW20).
  account: LibraryAccountType,          // Address of the input account for this split config.
  amount: UncheckedSplitAmount,         // Fixed amount of tokens or an amount defined based on a ratio.
  factor: Option<u64>                   // Multiplier relative to other denoms (only used if a ratio is specified).
}

// Split amount configuration, either a fixed amount of tokens or an amount defined based on a ratio
enum UncheckedSplitAmount {
  FixedAmount(Uint128),       // Fixed amount of tokens
  FixedRatio(Decimal),        // Fixed ratio e.g. 0.0262 for NTRN/STARS (or any other arbitrary ratio)
  DynamicRatio {              // Dynamic ratio calculation (delegated to an external contract)
    contract_addr: String,    // e.g. a TWAP oracle wrapper contract address
    params: String,           // base64-encoded arbitrary payload sent in addition to the denoms
  }
}

// Standard query & response for contract computing a dynamic ratio
// for the Splitter & Reverse Splitter libraries.
#[cw_serde]
#[derive(QueryResponses)]
pub enum DynamicRatioQueryMsg {
    #[returns(DynamicRatioResponse)]
    DynamicRatio {
        denoms: Vec<String>,
        params: String,
    }
}

#[cw_serde]
// Response returned by the external contract for a dynamic ratio
struct DynamicRatioResponse {
    pub denom_ratios: HashMap<String, Decimal>,
}
}

Valence Splitter library

The Valence Splitter library allows splitting funds from one input account to one or more output account(s), for one or more token denom(s), according to the configured ratio(s). It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Splitter library.

High-level flow

---
title: Splitter Library
---
graph LR
  IA((Input
      Account))
  OA1((Output
		  Account 1))
	OA2((Output
		  Account 2))
  P[Processor]
  S[Splitter
    Library]
  C[Contract]
  P -- 1/Split --> S
  S -- 2/Query balances --> IA
  S -. 3/Query split ratio .-> C
  S -- 4/Do Send funds --> IA
  IA -- 5/Send funds --> OA1
  IA -- 5'/Send funds --> OA2

Functions

| Function | Parameters | Description |
| --- | --- | --- |
| Split | - | Split funds from the configured input account to the output account(s), according to the configured token denom(s) and ratio(s). |

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
struct LibraryConfig {
    input_addr: LibraryAccountType,    // Address of the input account
    splits: Vec<UncheckedSplitConfig>, // Split configuration per denom
}

// Split config for specified account
struct UncheckedSplitConfig {
  denom: UncheckedDenom,          // Denom for this split configuration (either native or CW20)
  account: LibraryAccountType,    // Address of the output account for this split config
  amount: UncheckedSplitAmount,   // Fixed amount of tokens or an amount defined based on a ratio
}

// Split amount configuration, either a fixed amount of tokens or an amount defined based on a ratio
enum UncheckedSplitAmount {
  FixedAmount(Uint128),       // Fixed amount of tokens
  FixedRatio(Decimal),        // Fixed ratio e.g. 0.0262 for NTRN/STARS (or could be another arbitrary ratio)
  DynamicRatio {              // Dynamic ratio calculation (delegated to external contract)
    contract_addr: String,    // e.g. a TWAP oracle wrapper contract address
    params: String,           // base64-encoded arbitrary payload sent in addition to the denoms
  }
}

// Standard query & response for contract computing a dynamic ratio
// for the Splitter & Reverse Splitter libraries.
#[cw_serde]
#[derive(QueryResponses)]
pub enum DynamicRatioQueryMsg {
    #[returns(DynamicRatioResponse)]
    DynamicRatio {
        denoms: Vec<String>,
        params: String,
    }
}

#[cw_serde]
// Response returned by the external contract for a dynamic ratio
struct DynamicRatioResponse {
    pub denom_ratios: HashMap<String, Decimal>,
}
}
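
For reference, a contract answering the DynamicRatio query could be sketched as follows. This is a minimal sketch: lookup_ratio is a hypothetical stand-in for whatever pricing logic (e.g. a TWAP oracle read) the contract actually implements, and the query message types are those defined above.

#![allow(unused)]
fn main() {
use std::collections::HashMap;

use cosmwasm_std::{to_json_binary, Binary, Decimal, Deps, Env, StdResult};

// Hypothetical stand-in for real pricing logic.
fn lookup_ratio(_deps: Deps, _denom: &str, _params: &str) -> StdResult<Decimal> {
    Ok(Decimal::percent(50))
}

// Contract query entry point serving DynamicRatioQueryMsg.
pub fn query(deps: Deps, _env: Env, msg: DynamicRatioQueryMsg) -> StdResult<Binary> {
    match msg {
        DynamicRatioQueryMsg::DynamicRatio { denoms, params } => {
            let mut denom_ratios = HashMap::new();
            for denom in denoms {
                // resolve one ratio per requested denom
                denom_ratios.insert(denom.clone(), lookup_ratio(deps, &denom, &params)?);
            }
            to_json_binary(&DynamicRatioResponse { denom_ratios })
        }
    }
}
}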

Neutron Interchain Querier

The Neutron Interchain Querier library allows registering KV-based interchain queries and receiving their results. This library wraps the functionality enabled by the interchainqueries module on Neutron.

Prerequisites

Active Neutron ICQ relayer

This library requires active Neutron ICQ Relayers operating on the specified routes.

Valence Middleware broker

Each KV-based query requires a correctly encoded key in order to be registered. This library obtains the query keys from Valence Middleware brokers, which expose particular type registries.

For a given KV-query to be performed, the underlying type registry must implement the IcqIntegration trait, which provides the following adapters:

  1. get_kv_key, to obtain the correct KVKey for query registration
  2. decode_and_reconstruct, to reconstruct the interchain query result

Read more about ICQ integration for a given type on the type registry documentation page.

Query registration fee

The Neutron interchainqueries module escrows a fee (denominated in untrn) when a query is registered. The fee parameter is dynamic and can be queried via the interchainqueries module.

Query deregistration

Interchain Query escrow payments can be reclaimed by submitting the RemoveInterchainQuery message. Only the query owner (this contract) is able to submit this message.

Interchain Queries should be removed once they are no longer needed; however, that moment may differ for each Valence Program depending on its configuration.

Background on the interchainqueries module

Query Registration Message types

Interchain queries can be registered and unregistered by submitting the following neutron-sdk messages:

#![allow(unused)]
fn main() {
pub enum NeutronMsg {
	// other variants

	RegisterInterchainQuery {
		/// **query_type** is a query type identifier ('tx' or 'kv' for now).
		query_type: String,

		/// **keys** is the KV-storage keys for which we want to get values from remote chain.
		keys: Vec<KVKey>,

		/// **transactions_filter** is the filter for transaction search ICQ.
		transactions_filter: String,

		/// **connection_id** is an IBC connection identifier between Neutron and remote chain.
		connection_id: String,

		/// **update_period** is used to say how often the query must be updated.
		update_period: u64,
	},
	RemoveInterchainQuery {
		query_id: u64,
	},
}
}

where the KVKey is defined as follows:

#![allow(unused)]
fn main() {
pub struct KVKey {
    /// **path** is a path to the storage (storage prefix) where you want to read value by key (usually name of cosmos-packages module: 'staking', 'bank', etc.)
    pub path: String,

    /// **key** is a key you want to read from the storage
    pub key: Binary,
}
}

The RegisterInterchainQuery variant applies to both tx- and kv-based queries. Since we are dealing with kv-based queries here, the transactions_filter field is irrelevant and can be left empty.

Therefore our query registration message may look like the following:

#![allow(unused)]
fn main() {
    let kv_registration_msg = NeutronMsg::RegisterInterchainQuery {
        query_type: QueryType::KV.into(),
        keys: vec![query_kv_key],
        transactions_filter: String::new(),
        connection_id: "connection-3".to_string(),
        update_period: 5,
    };
}

query_kv_key here is obtained by calling into the associated broker module for a given type and query parameters.
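
For instance, obtaining that key from a broker could be sketched like this, using the broker QueryMsg and RegistryQueryMsg described later in the Middleware section (the broker address, the "gamm_pool" type identifier, and the parameter encoding are assumptions):

#![allow(unused)]
fn main() {
fn get_query_kv_key(deps: Deps, broker_addr: String, pool_id: u64) -> StdResult<KVKey> {
    deps.querier.query_wasm_smart(
        broker_addr,
        &QueryMsg {
            // None defaults to the latest registered type registry version
            registry_version: None,
            query: RegistryQueryMsg::KVKey {
                type_id: "gamm_pool".to_string(), // hypothetical type identifier
                params: BTreeMap::from([("pool_id".to_string(), to_json_binary(&pool_id)?)]),
            },
        },
    )
}
}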

Query Result Message types

After a query is registered and fetched back to Neutron, its results can be queried with the following neutron query:

#![allow(unused)]
fn main() {
pub enum NeutronQuery {
    /// Query a result of registered interchain query on remote chain
    InterchainQueryResult {
        /// **query_id** is an ID registered interchain query
        query_id: u64,
    },
	// other types
}
}

which will return the interchain query result:

#![allow(unused)]
fn main() {
pub struct InterchainQueryResult {
    /// **kv_results** is a raw key-value pairs of query result
    pub kv_results: Vec<StorageValue>,

    /// **height** is a height of remote chain
    pub height: u64,

    #[serde(default)]
    /// **revision** is a revision of remote chain
    pub revision: u64,
}
}

where StorageValue is defined as:

#![allow(unused)]
fn main() {
/// Describes value in the Cosmos-SDK KV-storage on remote chain
pub struct StorageValue {
    /// **storage_prefix** is a path to the storage (storage prefix) where you want to read
    /// value by key (usually name of cosmos-packages module: 'staking', 'bank', etc.)
    pub storage_prefix: String,

    /// **key** is a key under which the **value** is stored in the storage on remote chain
    pub key: Binary,

    /// **value** is a value which is stored under the **key** in the storage on remote chain
    pub value: Binary,
}
}

Query lifecycle

After the RegisterInterchainQuery message is submitted, the interchainqueries module will deduct the query registration fee from the caller.

At that point the query is assigned its unique query_id identifier, which is not known in advance. This identifier is returned to the caller in the reply.

Once the query is registered, the responsible query relayer performs the following steps:

  1. fetch the specified value from the target domain
  2. post the query result to interchainqueries module
  3. trigger SudoMsg::KVQueryResult endpoint on the contract that registered the query

SudoMsg::KVQueryResult does not carry the actual query result. Instead, it posts the query_id of the query that was performed, announcing that its result is available.

That query_id can then be used to query the interchainqueries module for the raw interchain query result. These raw results fetched from other Cosmos chains are protobuf-encoded and require additional processing before they can be reasoned about.
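
A sketch of that follow-up query (the QueryRegisteredQueryResultResponse wrapper is an assumption based on neutron-sdk, which also provides helper functions for this):

#![allow(unused)]
fn main() {
fn fetch_icq_result(deps: Deps<NeutronQuery>, query_id: u64) -> StdResult<InterchainQueryResult> {
    // ask the interchainqueries module for the raw, protobuf-encoded result
    let resp: QueryRegisteredQueryResultResponse = deps
        .querier
        .query(&QueryRequest::Custom(NeutronQuery::InterchainQueryResult {
            query_id,
        }))?;
    Ok(resp.result)
}
}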

Library Functions

At its core, this library should support initiating the interchain queries, receiving their responses, and reclaiming the escrowed fees by unregistering the queries.

In practice, however, these functions are not very useful in a broader Valence Program context by themselves: remote domain KV-query results arrive encoded in formats native to those remote domains.

For most cosmos-sdk based chains, storage values are stored in protobuf. Interpreting protobuf from within cosmwasm context is not straightforward and requires additional steps. Other domains may store their state in other encoding formats. We do not make any assumptions about remote domain encodings in this library - instead, that responsibility is handed over to the middleware.

For that reason, it is likely that this library will take on the additional responsibility of transforming those remote-encoded responses into canonical data formats that are easily recognized within the Valence Protocol scope. This transformation will be performed using the Valence Middleware.

After the query response is transformed into its canonical representation, the resulting data type is written into a Storage Account making it available for further processing, interpretation, or other functions.

Library Lifecycle

With the baseline functionality in mind, there are a few design decisions that shape the overall lifecycle of this library.

Instantiation flow

Neutron Interchain Querier is instantiated with the configuration needed to initiate and process the queries that it will be capable of executing.

This involves the following configuration parameters.

Account association

Like other libraries, this querier is going to be associated with an account. Associated Storage accounts will authorize instances of Neutron IC Queriers to post data objects of the canonical Valence types.

Unlike most other libraries, there is no notion of input and output accounts. There is just an account, and it is the only account that this library will be posting data into.

Account association will follow the same logic of approve/revoke as in other libraries.

Query configurations

On instantiation, the IC Querier is configured with the set of queries it will perform. This configuration consists of the complete set of parameters needed to register the queries and process their responses, as well as an outline of how those responses should be transformed into Valence Types and written under a particular storage slot of a given Storage Account.

Each query definition contains a unique identifier, which is needed to distinguish a given query from others during query registration and deregistration. A hypothetical shape for such a definition is sketched below.
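
This sketch is purely illustrative; the library defines its own configuration types:

#![allow(unused)]
fn main() {
use std::collections::BTreeMap;
use cosmwasm_std::Binary;

struct QueryDefinition {
    label: String,                    // unique query identifier
    connection_id: String,            // IBC connection from Neutron to the target domain
    update_period: u64,               // how often the query should be updated
    type_id: String,                  // type registry id used for the KV key and decoding
    params: BTreeMap<String, Binary>, // type-specific query parameters (e.g. pool_id)
    storage_slot: String,             // Storage Account slot for the canonical result
}
}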

Execution flow

With Neutron IC Querier instantiated, the library is ready to start carrying out the queries.

Query initiation

Configured queries can be initiated on demand by calling the execute method and specifying the unique query identifier(s).

This will, in turn, submit the query registration message to the interchainqueries module and kick off the interchain query flow. After the result is fetched back, the library will attempt to decode the response and convert it into a ValenceType, which is then posted into the associated Storage Account.

Query deregistration

At any point after the query registration, authorized addresses (admin/processor) are permitted to unregister a given query.

This will reclaim the escrow fee and remove the query from interchainqueries active queries list, thus concluding the lifecycle of a given query.

Library in Valence Programs

Neutron IC Querier does not behave as a standard library in that it does not produce any fungible outcome. Instead, it produces a foreign type that gets converted into a Valence Type.

While that result could be posted directly into the state of this library, it is instead posted to an associated output account meant for storing data. Just as some other libraries have a notion of input accounts that grant them permission to execute some logic, the Neutron IC Querier has a notion of an associated account that grants the querier permission to write data into its storage slots.

For example, consider a situation where this library has queried the balance of some remote account, parsed the response into a Valence Balance type, and written the resulting object into its associated storage account. That same associated account may be the input account of some other library, which will attempt to perform its function based on the content written to its input account. This may involve something along the lines of: if balance > 0, do x; otherwise, do y.

With that, the IC Querier flow in a Valence Program may look like this:

┌────────────┐                   ┌───────────┐
│ Neutron IC │   write Valence   │  storage  │
│  Querier   │──────result──────▶│  account  │
└────────────┘                   └───────────┘

Middleware

This section contains a description of the Valence Protocol middleware design.

Valence Protocol Middleware components:

Middleware Broker

The Middleware broker acts as an app-level integration gateway in Valence Programs. Integration here is deliberately left ambiguous: brokers should remain agnostic to the primitives being integrated into the Valence Protocol. These primitives may include, but are not limited to:

  • data types
  • functions
  • encoding schemes
  • any other distributed system building blocks that may be implemented differently

Problem statement

Valence Programs can be configured to span over multiple domains and last for an indefinite duration of time.

Domains integrated into Valence Protocol are sovereign and evolve on their own.

Middleware brokers provide the means to live with these differences by enabling various primitive conversions to be as seamless as possible. Seamless here primarily refers to causing no downtime to bring a given primitive up-to-date, and making the process of doing so as easy as possible for the developers.

To visualize a rather complex instance of this problem, consider the following situation. A Valence Program is initialized to continuously query a particular type from a remote domain, modify some of its values, and send the altered object back to the remote domain for further actions. At some point during the runtime, the remote domain performs an upgrade that extends the given type with additional fields. The Valence Program is unaware of this upgrade and continues with its order of operations. However, from the perspective of the Valence Program, the type in question has drifted and is no longer representative of its origin domain.

Among other things, Middleware brokers should enable such programs to gracefully recover into a synchronized state that can continue operating in a correct manner.

Broker Lifecycle

Brokers are singleton components that are instantiated before the program start time.

Valence Programs refer to their brokers of choice by their respective addresses.

This means that the same broker instance for a particular domain could be used across many Valence Programs.

Brokers maintain their set of type registries and index them by semver. New type registries can be added to a broker during runtime. While programs have the freedom to select a particular version of a type registry for a given request, by default the most up-to-date type registry is used.

These two properties reduce the work needed to keep integrations up to date across active Valence Programs: updating one broker with the latest version of a given domain immediately makes that version available to all Valence Programs using that broker.

API

The broker interface is agnostic to the type registries it indexes. A single query is exposed:

#![allow(unused)]
fn main() {
pub struct QueryMsg {
    pub registry_version: Option<String>,
    pub query: RegistryQueryMsg,
}
}

This query message should only change in situations where it may become limiting.

After receiving the query request, the broker relays the contained RegistryQueryMsg to the correct type registry and returns the result to the caller.

Middleware Type Registry

Middleware type registries are static components that define how primitives external to the Valence Protocol are adapted to be used within Valence programs.

While type registries can be used independently, they are typically meant to be registered into and used via brokers to ensure versioning is kept up to date.

Type Registry lifecycle

Type Registries are static contracts that define their primitives during compile time.

Once a registry is deployed, it is expected to remain unchanged. If a type change is needed, a new registry should be compiled, deployed, and registered into the broker to offer the missing or updated functionality.

API

All type registry instances must implement the same interface defined in middleware-utils.

Type registries function in a read-only manner - all of their functionality is exposed with the RegistryQueryMsg. Currently, the following primitive conversions are enabled:

#![allow(unused)]
fn main() {
pub enum RegistryQueryMsg {
    /// convert a canonical Valence type into its native counterpart
    #[returns(NativeTypeWrapper)]
    FromCanonical { obj: ValenceType },
    /// convert a binary-encoded native type into its canonical Valence counterpart
    #[returns(Binary)]
    ToCanonical { type_url: String, binary: Binary },

    /// get the kvkey used for registering an interchain query
    #[returns(KVKey)]
    KVKey {
        type_id: String,
        params: BTreeMap<String, Binary>,
    },

    #[returns(NativeTypeWrapper)]
    ReconstructProto {
        type_id: String,
        icq_result: InterchainQueryResult,
    },
}
}

RegistryQueryMsg can be seen as the superset of all primitives that Valence Programs can expect. No particular type being integrated into the system is required to implement all available functionality, although that is possible.

All type registries have to adhere to the same unified API. This means that if a particular type enabled in a type registry only provides the means to perform native <-> canonical conversion, attempting to call ReconstructProto on that type will return an error stating that protobuf reconstruction is not enabled for it.

Module organization

Primitives defined in type registries should be outlined in a domain-driven manner. Types, encodings, and any other functionality should be grouped by their domain and are expected to be self-contained, not leaking into other primitives.

For instance, an osmosis type registry is expected to contain all registry instances related to the Osmosis domain. Different registry instances should be versioned by semver, following that of the external domain whose primitives are being integrated.

Enabled primitives

Currently, the following type registry primitives are enabled:

  • Neutron Interchain Query types:
    • reconstructing native types from protobuf
    • obtaining the KVKey used to initiate the query for a given type
  • Valence Canonical Types:
    • reconstructing native types from Valence Types
    • mapping native types into Valence Types

Example integration

As an example, consider the integration of the osmosis gamm pool.

Neutron Interchain Query integration

Neutron Interchain Query integration for a given type is achieved by implementing the IcqIntegration trait:

#![allow(unused)]
fn main() {
pub trait IcqIntegration {
    fn get_kv_key(params: BTreeMap<String, Binary>) -> Result<KVKey, MiddlewareError>;
    fn decode_and_reconstruct(
        query_id: String,
        icq_result: InterchainQueryResult,
    ) -> Result<Binary, MiddlewareError>;
}
}

get_kv_key

Implementing get_kv_key provides the means to obtain the KVKey needed to register the interchain query. For the osmosis gamm pool, the implementation may look like this:

#![allow(unused)]
fn main() {
impl IcqIntegration for OsmosisXykPool {
    fn get_kv_key(params: BTreeMap<String, Binary>) -> Result<KVKey, MiddlewareError> {
        let pool_prefix_key: u8 = 0x02;

        let id: u64 = try_unpack_domain_specific_value("pool_id", &params)?;

        let mut pool_access_key = vec![pool_prefix_key];
        pool_access_key.extend_from_slice(&id.to_be_bytes());

        Ok(KVKey {
            path: STORAGE_PREFIX.to_string(),
            key: Binary::new(pool_access_key),
        })
    }
}
}

decode_and_reconstruct

The other part of enabling interchain queries is the implementation of decode_and_reconstruct. This method is called when the ICQ relayer posts the query result back to the interchainqueries module on Neutron. For the osmosis gamm pool, the implementation may look like this:

#![allow(unused)]
fn main() {
impl IcqIntegration for OsmosisXykPool {
    fn decode_and_reconstruct(
        _query_id: String,
        icq_result: InterchainQueryResult,
    ) -> Result<Binary, MiddlewareError> {
        let any_msg: Any = Any::decode(icq_result.kv_results[0].value.as_slice())
            .map_err(|e| MiddlewareError::DecodeError(e.to_string()))?;

        let osmo_pool: Pool = any_msg
            .try_into()
            .map_err(|_| StdError::generic_err("failed to parse into pool"))?;

        to_json_binary(&osmo_pool)
            .map_err(StdError::from)
            .map_err(MiddlewareError::Std)
    }
}
}

Valence Type integration

Valence Type integration for a given type is achieved by implementing the ValenceTypeAdapter trait:

#![allow(unused)]
fn main() {
pub trait ValenceTypeAdapter {
    type External;

    fn try_to_canonical(&self) -> Result<ValenceType, MiddlewareError>;
    fn try_from_canonical(canonical: ValenceType) -> Result<Self::External, MiddlewareError>;
}
}

Ideally, Valence Types should represent the minimal amount of information needed and avoid any domain-specific logic or identifiers. In practice, this is a hard problem: native types that are mapped into Valence Types may need to be sent back to the remote domains. For that reason, we cannot afford to lose any domain-specific fields, and instead store them in the Valence Type itself for later reconstruction.

In the case of ValenceXykPool, this storage is kept in its domain_specific_fields field. Any fields that are logically common across all possible integrations into this type should be kept in their dedicated fields. In the case of constant product pools, such fields are the assets in the pool, and the shares issued that represent those assets:

#![allow(unused)]
fn main() {
#[cw_serde]
pub struct ValenceXykPool {
    /// assets in the pool
    pub assets: Vec<Coin>,

    /// total amount of shares issued
    pub total_shares: String,

    /// any other fields that are unique to the external pool type
    /// being represented by this struct
    pub domain_specific_fields: BTreeMap<String, Binary>,
}
}

try_to_canonical

Implementing try_to_canonical provides the means of mapping a native remote type into the canonical Valence Type to be used in the Valence Protocol. For the osmosis gamm pool, the implementation may look like this:

#![allow(unused)]
fn main() {
impl ValenceTypeAdapter for OsmosisXykPool {
    type External = Pool;

    fn try_to_canonical(&self) -> Result<ValenceType, MiddlewareError> {
        // pack all the domain-specific fields
        let mut domain_specific_fields = BTreeMap::from([
            (ADDRESS_KEY.to_string(), to_json_binary(&self.0.address)?),
            (ID_KEY.to_string(), to_json_binary(&self.0.id)?),
            (
                FUTURE_POOL_GOVERNOR_KEY.to_string(),
                to_json_binary(&self.0.future_pool_governor)?,
            ),
            (
                TOTAL_WEIGHT_KEY.to_string(),
                to_json_binary(&self.0.total_weight)?,
            ),
            (
                POOL_PARAMS_KEY.to_string(),
                to_json_binary(&self.0.pool_params)?,
            ),
        ]);

        if let Some(shares) = &self.0.total_shares {
            domain_specific_fields
                .insert(SHARES_DENOM_KEY.to_string(), to_json_binary(&shares.denom)?);
        }

        for asset in &self.0.pool_assets {
            if let Some(token) = &asset.token {
                domain_specific_fields.insert(
                    format!("pool_asset_{}_weight", token.denom),
                    to_json_binary(&asset.weight)?,
                );
            }
        }

        let mut assets = vec![];
        for asset in &self.0.pool_assets {
            if let Some(t) = &asset.token {
                assets.push(coin(u128::from_str(&t.amount)?, t.denom.to_string()));
            }
        }

        let total_shares = self
            .0
            .total_shares
            .clone()
            .map(|shares| shares.amount)
            .unwrap_or_default();

        Ok(ValenceType::XykPool(ValenceXykPool {
            assets,
            total_shares,
            domain_specific_fields,
        }))
    }
}
}

try_from_canonical

The other part of enabling Valence Type integration is the implementation of try_from_canonical. This method is called when converting from the canonical back to the native version of a type. For the osmosis gamm pool, the implementation may look like this:

#![allow(unused)]
fn main() {
impl ValenceTypeAdapter for OsmosisXykPool {
    type External = Pool;

    fn try_from_canonical(canonical: ValenceType) -> Result<Self::External, MiddlewareError> {
        let inner = match canonical {
            ValenceType::XykPool(pool) => pool,
            _ => {
                return Err(MiddlewareError::CanonicalConversionError(
                    "canonical inner type mismatch".to_string(),
                ))
            }
        };
        // unpack domain specific fields from inner type
        let address: String = inner.get_domain_specific_field(ADDRESS_KEY)?;
        let id: u64 = inner.get_domain_specific_field(ID_KEY)?;
        let future_pool_governor: String =
            inner.get_domain_specific_field(FUTURE_POOL_GOVERNOR_KEY)?;
        let pool_params: Option<PoolParams> = inner.get_domain_specific_field(POOL_PARAMS_KEY)?;
        let shares_denom: String = inner.get_domain_specific_field(SHARES_DENOM_KEY)?;
        let total_weight: String = inner.get_domain_specific_field(TOTAL_WEIGHT_KEY)?;

        // unpack the pool assets
        let mut pool_assets = vec![];
        for asset in &inner.assets {
            let pool_asset = PoolAsset {
                token: Some(Coin {
                    denom: asset.denom.to_string(),
                    amount: asset.amount.into(),
                }),
                weight: inner
                    .get_domain_specific_field(&format!("pool_asset_{}_weight", asset.denom))?,
            };
            pool_assets.push(pool_asset);
        }

        Ok(Pool {
            address,
            id,
            pool_params,
            future_pool_governor,
            total_shares: Some(Coin {
                denom: shares_denom,
                amount: inner.total_shares,
            }),
            pool_assets,
            total_weight,
        })
    }
}
}

Valence Types

Valence Types are a set of canonical type wrappers to be used inside Valence Programs.

The primary operational domain of the Valence Protocol needs to consume, interpret, and otherwise manipulate data from external domains. For that reason, canonical representations of such types are defined in order to form an abstraction layer that all Valence Programs can reason about.

Canonical Type integrations

Canonical types to be used in Valence Programs are enabled by the Valence Protocol.

For instance, consider Astroport XYK and Osmosis GAMM pool types. These are two distinct data types that represent the same underlying concept - a constant product pool.

These types can be unified in the Valence Protocol context by being mapped to and from the following Valence Type definition:

#![allow(unused)]
fn main() {
pub struct ValenceXykPool {
    /// assets in the pool
    pub assets: Vec<Coin>,

    /// total amount of shares issued
    pub total_shares: String,

    /// any other fields that are unique to the external pool type
    /// being represented by this struct
    pub domain_specific_fields: BTreeMap<String, Binary>,
}
}

For a remote type to be integrated into the Valence Protocol means that there are available adapters that map between the canonical and original type definitions.

These adapters can be implemented by following the design outlined by type registries.

Active Valence Types

Active Valence types provide the interface for integrating remote domain representations of the same underlying concepts. Remote types can be integrated into Valence Protocol if and only if there is an enabled Valence Type representing the same underlying primitive.

Currently enabled Valence types are:

  • XYK pool
  • Balance response

Examples

Here are some examples of Valence Programs that you can use to get started.

Token Swap Program

This example demonstrates a simple token swap program whereby two parties wish to exchange specific amounts of (different) tokens they each hold, at a rate they have previously agreed on. The program ensures the swap happens atomically, so neither party can withdraw without completing the trade.

---
title: Valence token swap program
---
graph LR
	InA((Party A Deposit))
	InB((Party B Deposit))
	OutA((Party A Withdraw))
	OutB((Party B Withdraw))
	SSA[Splitter A]
	SSB[Splitter B]
	subgraph Neutron
	InA --> SSA --> OutB
	InB --> SSB --> OutA
	end

The program is composed of the following components:

  • Party A Deposit account: a Valence account which Party A will deposit their tokens into, to be exchanged with Party B's tokens.
  • Splitter A: an instance of the Splitter library that will transfer Party A's tokens from its input account (i.e. the Party A Deposit account) to its output account (i.e. the Party B Withdraw account) upon execution of its split function.
  • Party B Withdraw account: the account from which Party B can withdraw Party A's tokens after the swap has successfully completed. Note: this can be a Valence account, but it could also be a regular chain account, or a smart contract.
  • Party B Deposit account: a Valence account which Party B will deposit their funds into, to be exchanged with Party A's funds.
  • Splitter B: an instance of the Splitter library that will transfer Party B's tokens from its input account (i.e. the Party B Deposit account) to its output account (i.e. the Party A Withdraw account) upon execution of its split function.
  • Party A Withdraw account: the account from which Party A can withdraw Party B's tokens after the swap has successfully completed. Note: this can be a Valence account, but it could also be a regular chain account, or a smart contract.

The program fulfils the requirement of an atomic exchange of tokens between the two parties by implementing an atomic subroutine composed of two function calls:

  1. Splitter A's split function
  2. Splitter B's split function

The Authorizations component will ensure that either both succeed or neither is executed, thereby ensuring that funds remain safe at all times (either remaining in the respective deposit accounts, or transferred to the respective withdraw accounts).

Crosschain Vaults

Note: This example is still in the design phase and includes new or experimental features of Valence Programs that may not be supported in the current production release.

Overview

You can use Valence Programs to create crosschain vaults. Users interact with a vault on one chain while the tokens are held on another chain where yield is generated.

Note: In our initial implementation we use Neutron for co-processing and Hyperlane for general message passing between the co-processor and the target domain. Deployment of Valence programs as zk RISC-V co-processors with permissionless message passing will be available in the coming months.

In this example, we have made the following assumptions:

  • Users can deposit tokens into a standard ERC-4626 vault on Ethereum.
  • ERC-20 shares are issued to users on Ethereum.
  • If a user wishes to redeem their tokens, they can issue a withdrawal request which will burn the user's shares when tokens are redeemed.
  • The redemption rate that tells us how many tokens can be redeemed per share is given by: \( R = \frac{TotalAssets}{TotalIssuedShares} = \frac{TotalInVault + TotalInTransit + TotalInPosition}{TotalIssuedShares} \) (see the worked example after this list).
  • A permissioned actor called the "Strategist" is authorized to transport funds from Ethereum to Neutron where they are locked in some DeFi protocol. And vice-versa, the Strategist can withdraw from the position so the funds are redeemable on Ethereum. The redemption rate must be adjusted by the Strategist accordingly.
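
A worked example of the redemption rate formula, with made-up numbers:

#![allow(unused)]
fn main() {
// made-up balances, all denominated in the vault's deposit token
let total_in_vault: u128 = 400_000;
let total_in_transit: u128 = 100_000;
let total_in_position: u128 = 500_000;
let total_issued_shares: u128 = 800_000;

// R = (400_000 + 100_000 + 500_000) / 800_000 = 1.25 tokens per share
let rate = (total_in_vault + total_in_transit + total_in_position) as f64
    / total_issued_shares as f64;
assert_eq!(rate, 1.25);
}
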
---
title: Crosschain Vaults Overview
---
graph LR
	User
	EV(Ethereum Vault)
	NP(Neutron Position)

	User -- Tokens --> EV
	EV -- Shares --> User
	EV -- Strategist Transport --> NP
	NP -- Strategist Transport --> EV

While we have chosen Ethereum and Neutron as examples here, one could similarly construct such vaults between any two chains as long as they are supported by Valence Programs.

Implementing Crosschain Vaults as a Valence Program

Recall that Valence Programs are composed of Libraries and Accounts. Libraries are collections of Functions that perform token operations on the Accounts. Since there are two chains here, Libraries and Accounts will exist on both chains.

Since gas is cheaper on Neutron than on Ethereum, computationally expensive operations, such as constraining the Strategist's actions, will be done on Neutron. Authorized messages will then be executed by each chain's Processor. Hyperlane is used to pass messages from the Authorization contract on Neutron to the Processor on Ethereum.

---
title: Program Control
---
graph BT
	Strategist
	subgraph Ethereum
		EP(Processor)
		EHM(Hyperlane Mailbox)
		EL(Ethereum Valence Libraries)
		EVA(Valence Accounts)
	end
	subgraph Neutron
		A(Authorizations)
		NP(Processor)
		EE(EVM Encoder)
		NHM(Hyperlane Mailbox)
		NL(Neutron Valence Libraries)
		NVA(Valence Accounts)
	end

	Strategist --> A
	A --> EE --> NHM --> Relayer --> EHM --> EP --> EL --> EVA
	A --> NP --> NL--> NVA

Libraries and Accounts needed

On Ethereum, we'll need Accounts for:

  • Deposit: To hold user deposited tokens. Tokens from this pool can then be transported to Neutron.
  • Withdraw: To hold tokens received from Neutron. Tokens from this pool can then be redeemed for shares.

On Neutron, we'll need Accounts for:

  • Deposit: To hold tokens bridged from Ethereum. Tokens from this pool can be used to enter into the position on Neutron.
  • Position: Will hold the vouchers or shares associated with the position on Neutron.
  • Withdraw: To hold the tokens that are withdrawn from the position. Tokens from this pool can be bridged back to Ethereum.

We'll need the following Libraries on Ethereum:

  • Bridge Transfer: To transfer funds from the Ethereum Deposit Account to the Neutron Deposit Account.
  • Forwarder: To transfer funds between the Deposit and Withdraw Accounts on Ethereum. Two instances of the Library will be required.

We'll need the following Libraries on Neutron:

  • Position Depositor: To take funds in the Deposit and create a position with them. The position is held by the Position account.
  • Position Withdrawer: To redeem a position for underlying funds that are then transferred to the Withdraw Account on Neutron.
  • Bridge Transfer: To transfer funds from the Neutron Withdraw Account to the Ethereum Withdraw Account.

Note that the Accounts mentioned here are standard Valence Accounts. The Bridge Transfer library will depend on the token being transferred, but will offer functionality similar to the IBC Transfer library. The Position Depositor and Withdrawer will depend on the type of position, but can be similar to the Liquidity Provider and Liquidity Withdrawer.

Vault Contract

The Vault contract is a special contract on Ethereum that has an ERC-4626 interface.

User methods to deposit funds

  • Deposit: Deposit funds into the registered Deposit Account. Receive shares back based on the redemption rate.
    Deposit {
    	amount: Uint256,
    	receiver: String
    }
    
  • Mint: Mint shares from the vault. Expects the user to provide sufficient tokens to cover the cost of the shares based on the current redemption rate.
    Mint {
    	shares: Uint256,
    	receiver: String
    }
    
---
title: User Deposit and Share Mint Flow
---
graph LR
	User
	subgraph Ethereum
		direction LR
		EV(Vault)
		ED((Deposit))
	end
	
	User -- 1/ Deposit Tokens --> EV
	EV -- 2/ Send Shares --> User
	EV -- 3/ Send Tokens --> ED

User methods to withdraw funds

  • Redeem: Send shares to redeem assets. This creates a WithdrawRecord in a queue. This record is processed at the next Epoch.
    Redeem {
    	shares: Uint256,
    	receiver: String,
    	max_loss_bps: u64
    }
    
  • Withdraw: Withdraw an amount of assets. Expects the user to hold sufficient shares. This creates a WithdrawRecord in a queue. This record is processed at the next Epoch.
    Withdraw {
    	amount: Uint256,
    	receiver: String,
    	max_loss_bps: u64
    }
    

Withdrawals are subject to a lockup period after the user has initiated a redemption. During this time the redemption rate may change. Using the max_loss_bps parameter, users can specify an acceptable loss in case the redemption rate decreases.
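
As a rough sketch of the accounting involved (hypothetical helpers, not the vault's actual implementation), assets and shares convert through the redemption rate, and the max_loss_bps check bounds how far the rate may fall between redemption and completion:

#![allow(unused)]
fn main() {
    // Assumed fixed-point conventions, for illustration only.
    const BPS: u128 = 10_000;
    const SCALE: u128 = 1_000_000_000_000_000_000; // rate is assets-per-share, scaled by 1e18

    // Shares minted for a deposit at the current redemption rate.
    fn shares_for_deposit(amount: u128, rate: u128) -> u128 {
        amount * SCALE / rate
    }

    // Assets owed when redeeming shares.
    fn assets_for_shares(shares: u128, rate: u128) -> u128 {
        shares * rate / SCALE
    }

    // The max_loss_bps check: reject completion if the rate fell further than
    // the user allowed between redemption and withdrawal completion.
    fn within_max_loss(rate_at_redeem: u128, rate_now: u128, max_loss_bps: u128) -> bool {
        if rate_now >= rate_at_redeem {
            return true;
        }
        let loss_bps = (rate_at_redeem - rate_now) * BPS / rate_at_redeem;
        loss_bps <= max_loss_bps
    }
}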

After the Epoch has completed, a user may complete the withdrawal by executing the following message:

  • CompleteWithdraw: Pop the WithdrawRecord. Pull funds from the Withdraw Account and send them to the user. Burn the user's deposited shares.

---
title: User Withdraw Flow
---
graph RL
	subgraph Ethereum
		direction RL
		EV(Vault)
		EW((Withdraw))
	end
	EW -- 2/ Send Tokens --> EV -- 3/ Send Tokens --> User
	User -- 1/ Deposit Shares --> EV

Strategist methods to manage the vault

The Vault contract validates that calls to it are made by the Processor. On Neutron, the Authorization contract ensures that only a trusted Strategist can initiate those calls. The Authorization contract can further constrain when or how Strategist actions can be taken.

  • Update: The strategist can update the current redemption rate.
    Update {
      rate: Uint256
    }
    
  • Pause and Unpause: The strategist can pause and unpause vault operations.
    Pause {}
    
    Unpause {}
    

Program subroutines

The program authorizes the Strategist to update the redemption rate and transport funds between various Accounts.

Allowing the Strategist to transport funds

---
title: From Ethereum Deposit Account to Neutron Position Account
---
graph LR
	subgraph Ethereum
		ED((Deposit))
		ET(Bridge Transfer)
	end
	subgraph Neutron
		NPH((Position Holder))
		NPD(Position Depositor)
		ND((Deposit))
	end

	ED --> ET --> ND --> NPD --> NPH
---
title: From Neutron Position Account to Ethereum Withdraw Account
---
graph RL
	subgraph Ethereum
		EW((Withdraw))
	end
	subgraph Neutron
		NPH((Position Holder))
		NW((Withdraw))
		NT(Bridge Transfer)
		NPW(Position Withdrawer)
	end

	NPH --> NPW --> NW --> NT --> EW

---
title: Between Ethereum Deposit and Ethereum Withdraw Accounts
---
graph
	subgraph Ethereum
		ED((Deposit))
		EW((Withdraw))
		FDW(Forwarder)
	end
	ED --> FDW --> EW
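
For example, the authorization gating these transport subroutines might be restricted to the Strategist's address. Here is a minimal sketch using the AuthorizationBuilder, where STRATEGIST_ADDR and forwarder_function are placeholders and the permissioned-mode variant is an assumption based on the authorization utilities:

#![allow(unused)]
fn main() {
    // Sketch: restrict the "forward" subroutine to a single Strategist address.
    // STRATEGIST_ADDR and forwarder_function are placeholders for illustration.
    let authorization = AuthorizationBuilder::new()
        .with_label("forward")
        .with_mode(AuthorizationModeInfo::Permissioned(
            // Allow only the Strategist to invoke this authorization.
            PermissionTypeInfo::WithoutCallLimit(vec![STRATEGIST_ADDR.to_string()]),
        ))
        .with_subroutine(
            AtomicSubroutineBuilder::new()
                .with_function(forwarder_function)
                .build(),
        )
        .build();
}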

Design notes

This is a simplified design to demonstrate how a cross-chain vault can be implemented with Valence Programs. Production deployments will need to consider additional factors not covered here, including:

  • Fees for gas, bridging, and entering/exiting the position on Neutron. It is recommended that the vault impose a withdrawal fee and a platform fee on users.
  • How to constrain Strategist behavior to ensure they set redemption rates correctly.

Testing your programs

Our testing infrastructure is built on several tools that work together to provide a comprehensive local testing environment:

Core Testing Framework

We use local-interchain, a component of the interchaintest developer toolkit. This allows you to deploy and run chains in a local environment, providing a controlled testing space for your blockchain applications.

Localic Utils

To make these tools more accessible in Rust, we've developed localic-utils. This Rust library provides convenient interfaces to interact with the local-interchain testing framework.

Program Manager

We provide a tool called the Program Manager that helps you manage your programs. Together with local-interchain, it gives you the abstractions and helper functions to create your programs more efficiently.

Use of the Program Manager is optional: it abstracts away a lot of functionality and lets you create programs with much less code. If you want more fine-grained control over your programs, we also provide helper functions to create and interact with your programs directly, without the manager. In this section, we'll show two different examples of how to test your programs, one using the Program Manager and one without it. There are many more examples, each covering a different use case, in the examples folder of our local-interchaintest folder.

Initial Testing Set Up

Whether or not you use the Program Manager to test your programs, there is a common setup that needs to be done. This setup initializes the testing context with all the required information about the local-interchain environment.

1. Setting the TestContext using the TestContextBuilder

The TestContext is the interchain environment in which your program will run. Say you want to configure the Neutron and Osmosis chains; you may set them up as follows:

#![allow(unused)]
fn main() {
    let mut test_ctx = TestContextBuilder::default()
        .with_unwrap_raw_logs(true)
        .with_api_url(LOCAL_IC_API_URL)
        .with_artifacts_dir(VALENCE_ARTIFACTS_PATH)
        .with_chain(ConfigChainBuilder::default_neutron().build()?)
        .with_chain(ConfigChainBuilder::default_osmosis().build()?)
        .with_log_file_path(LOGS_FILE_PATH)
        .with_transfer_channels(NEUTRON_CHAIN_NAME, OSMOSIS_CHAIN_NAME)
        .build()?;
}

This instantiates a TestContext with two chains, Neutron and Osmosis, that are connected via IBC by providing the transfer_channels parameter. The api_url is the URL of the local-interchain API, the artifacts_dir is the path where the compiled programs are stored, and the log_file_path is where the logs will be written. The most important part is the chains themselves, created here with ConfigChainBuilder using the default configurations for Neutron and Osmosis, along with the transfer channels between them. We provide builders for most chains, but you can also create your own configurations.

2. Custom chain-specific setup

Some chains require additional setup before they can interact with others. For example, if you are going to use a liquid staking chain like Persistence, you need to register and activate the host zone to allow liquid staking of its native token. We provide helper functions that do this for you. Here's an example:

#![allow(unused)]
fn main() {
    info!("Registering host zone...");
    register_host_zone(
        test_ctx
            .get_request_builder()
            .get_request_builder(PERSISTENCE_CHAIN_NAME),
        NEUTRON_CHAIN_ID,
        &connection_id,
        &channel_id,
        &native_denom,
        DEFAULT_KEY,
    )?;


    info!("Activating host zone...");
    activate_host_zone(NEUTRON_CHAIN_ID)?;
}

Other examples of chain-specific setup include deploying Astroport contracts, creating Osmosis pools... We provide helper functions for pretty much all of them, and there are examples for each in the examples folder.

Example without Program Manager

This example demonstrates how to test your program without the Program Manager after your initial testing set up has been completed as described in the Initial Testing Set Up section.

Use-case: In this particular example, we will show you how to create a program that liquid stakes NTRN tokens on the Persistence chain directly from a base account, without using libraries. Note that this example is for demonstration purposes only: in a real-world scenario you would not liquid stake NTRN, as it is not a staking token. We are also not using a liquid staking library here, although one could be created for this purpose.

The full code for this example can be found in the Persistence Liquid Staking example.

  1. Set up the authorization contract and processor on the Main Domain (Neutron).
#![allow(unused)]
fn main() {
    let now = SystemTime::now();
    let salt = hex::encode(
        now.duration_since(SystemTime::UNIX_EPOCH)?
            .as_secs()
            .to_string(),
    );

    let (authorization_contract_address, _) =
        set_up_authorization_and_processor(&mut test_ctx, salt.clone())?;
}

This code sets up the authorization contract and processor on Neutron. We use a time-based salt so that the generated contract addresses differ on each test run. The set_up_authorization_and_processor helper instantiates both the processor and authorization contracts on Neutron and returns the addresses for interacting with them. As you can see, we are not using the processor on Neutron in this example, but we still set it up.

  2. Set up an external domain and create a channel to start relaying messages.
#![allow(unused)]
fn main() {
    let processor_on_persistence = set_up_external_domain_with_polytone(
        &mut test_ctx,
        PERSISTENCE_CHAIN_NAME,
        PERSISTENCE_CHAIN_ID,
        PERSISTENCE_CHAIN_ADMIN_ADDR,
        LOCAL_CODE_ID_CACHE_PATH_PERSISTENCE,
        "neutron-persistence",
        salt,
        &authorization_contract_address,
    )?;
}

This function does the following:

  • Instantiates all the Polytone contracts on both the main domain and the new external domain. The information of the external domain is provided in the function arguments.
  • Creates a channel between the Polytone contracts that the relayer will use to relay messages between the authorization contract and the processor.
  • Instantiates the Processor contract on the external domain with the correct Polytone information and the authorization contract address.
  • Adds the external domain to the authorization contract with the Polytone information and the processor address on the external domain.

After this is done, we can create authorizations for that external domain; when we send messages to the authorization contract, the relayer will relay them to the processor on the external domain and return the callbacks.

  3. Create one or more base accounts on a domain.
#![allow(unused)]
fn main() {
    let base_accounts = create_base_accounts(
        &mut test_ctx,
        DEFAULT_KEY,
        PERSISTENCE_CHAIN_NAME,
        base_account_code_id,
        PERSISTENCE_CHAIN_ADMIN_ADDR.to_string(),
        vec![processor_on_persistence.clone()],
        1,
        None,
    );
    let persistence_base_account = base_accounts.first().unwrap();
}

This function creates a base account on the external domain and grants the processor address permission to execute messages on its behalf. If we were using a library instead, we would grant permission to the library contract rather than to the processor address in the provided array.

  4. Create the authorization
#![allow(unused)]
fn main() {
    let authorizations = vec![AuthorizationBuilder::new()
        .with_label("execute")
        .with_subroutine(
            AtomicSubroutineBuilder::new()
                .with_function(
                    AtomicFunctionBuilder::new()
                        .with_domain(Domain::External(PERSISTENCE_CHAIN_NAME.to_string()))
                        .with_contract_address(LibraryAccountType::Addr(
                            persistence_base_account.clone(),
                        ))
                        .with_message_details(MessageDetails {
                            message_type: MessageType::CosmwasmExecuteMsg,
                            message: Message {
                                name: "execute_msg".to_string(),
                                params_restrictions: None,
                            },
                        })
                        .build(),
                )
                .build(),
        )
        .build()];

    info!("Creating execute authorization...");
    let create_authorization = valence_authorization_utils::msg::ExecuteMsg::PermissionedAction(
        valence_authorization_utils::msg::PermissionedMsg::CreateAuthorizations { authorizations },
    );

    contract_execute(
        test_ctx
            .get_request_builder()
            .get_request_builder(NEUTRON_CHAIN_NAME),
        &authorization_contract_address,
        DEFAULT_KEY,
        &serde_json::to_string(&create_authorization).unwrap(),
        GAS_FLAGS,
    )
    .unwrap();
    std::thread::sleep(std::time::Duration::from_secs(3));
    info!("Execute authorization created!");
}

In this code snippet, we create an authorization to execute a message on the Persistence base account. For this particular example, since we are going to execute a CosmosMsg::Stargate directly on the account, passing the protobuf message, we do not set up any param restrictions. If we were using a library, we could set up restrictions on the JSON message that the library expects.
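
For reference, if restrictions were needed, they would go in the params_restrictions field of the Message. A hedged sketch, where the ParamRestriction variant name is an assumption based on the authorization utilities:

#![allow(unused)]
fn main() {
    // Sketch: require that the library's execute message contains the
    // "process_function" key. Variant names here are assumptions.
    let message = Message {
        name: "process_function".to_string(),
        params_restrictions: Some(vec![ParamRestriction::MustBeIncluded(vec![
            "process_function".to_string(),
        ])]),
    };
}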

  5. Send a message to the authorization contract
#![allow(unused)]
fn main() {
info!("Send the messages to the authorization contract...");

    let msg_liquid_stake = MsgLiquidStake {
        amount: Some(Coin {
            denom: neutron_on_persistence.clone(),
            amount: amount_to_liquid_stake.to_string(),
        }),
        delegator_address: persistence_base_account.clone(),
    };
    #[allow(deprecated)]
    let liquid_staking_message = CosmosMsg::Stargate {
        type_url: msg_liquid_stake.to_any().type_url,
        value: Binary::from(msg_liquid_stake.to_proto_bytes()),
    };

    let binary = Binary::from(
        serde_json::to_vec(&valence_account_utils::msg::ExecuteMsg::ExecuteMsg {
            msgs: vec![liquid_staking_message],
        })
        .unwrap(),
    );
    let message = ProcessorMessage::CosmwasmExecuteMsg { msg: binary };
    let send_msg = valence_authorization_utils::msg::ExecuteMsg::PermissionlessAction(
        valence_authorization_utils::msg::PermissionlessMsg::SendMsgs {
            label: "execute".to_string(),
            messages: vec![message],
            ttl: None,
        },
    );

    contract_execute(
        test_ctx
            .get_request_builder()
            .get_request_builder(NEUTRON_CHAIN_NAME),
        &authorization_contract_address,
        DEFAULT_KEY,
        &serde_json::to_string(&send_msg).unwrap(),
        GAS_FLAGS,
    )
    .unwrap();
    std::thread::sleep(std::time::Duration::from_secs(3));
}

In this code snippet, we send a message to the authorization contract to execute the liquid staking message on the base account on Persistence. Note that we use the same label as in the authorization creation. This is important because the authorization contract checks that the label matches the one in the authorization; if it does not, execution will fail. The authorization contract then sends the message to the corresponding Polytone contract, which sends it via IBC to the processor on the external domain.

  6. Tick the processor
#![allow(unused)]
fn main() {
    tick_processor(
        &mut test_ctx,
        PERSISTENCE_CHAIN_NAME,
        DEFAULT_KEY,
        &processor_on_persistence,
    );
    std::thread::sleep(std::time::Duration::from_secs(3));
}

The message should now be sitting on the processor on Persistence, so we tick the processor to trigger execution. This executes the message and sends a callback with the result to the authorization contract, completing the full testing cycle.

Example with Program Manager

This example demonstrates how to test your program using the Program Manager after your initial testing set up has been completed as described in the Initial Testing Set Up section.

Use-case: This example outlines the steps needed to create a program that provides and withdraws liquidity from an Osmosis Concentrated Liquidity pool using two library contracts: a CL Liquidity Provider and a CL Liquidity Withdrawer.

Prerequisites

Before proceeding, ensure you have:

  • A basic understanding of Osmosis, Neutron, CosmWasm, and Valence
  • Completed the initial testing setup as described in the setup section
  • Installed all necessary dependencies and have a working development environment

Solution Overview

Full working code for this example can be found in the Osmosis Concentrated Liquidity example.

Our solution includes the following:

  • We create three accounts on Osmosis
    • CL Input holds tokens ready to join the pool
    • CL Output holds the position of the pool
    • Final Output holds tokens after they've been withdrawn from the pool
  • We instantiate the Concentrated Liquidity Provider and Concentrated Liquidity Withdrawer libraries on Osmosis
    • The Liquidity Provider library will draw tokens from the CL Input account and use them to enter the pool
    • The Liquidity Withdrawer library will exit the pool from the position held in the CL Output account and deposit redeemed tokens to the Final Output account
  • We add two permissionless authorizations on Neutron:
    • Provide Liquidity: When executed, it'll call the provide liquidity function
    • Withdraw Liquidity: When executed, it'll call the withdraw liquidity function

The following is a visual representation of the system we are building:

graph TD;
    subgraph Osmosis
        A1((CL Input))
        A2((CL Output))
        A3((Final Output))
        L1[Liquidity Provider]
        L2[Liquidity Withdrawer]
        EP[Processor]
    end

    subgraph Neutron
        A[Authorizations]
        MP[Processor]
    end

    A1 --> L1 --> A2
    A2 --> L2 --> A3

    User --Execute Msg--> A --Enqueue Batch --> EP
    EP --> L1
    EP --> L2

Code walkthrough

Before we begin, we set up the TestContext as explained in the previous setup section. Then we can move on to steps pertinent to testing this example.

1. Setting up the program

1.1 Set up the Concentrated Liquidity pool on Osmosis

#![allow(unused)]
fn main() {
let ntrn_on_osmo_denom = test_ctx
    .get_ibc_denom()
    .base_denom(NEUTRON_CHAIN_DENOM.to_owned())
    .src(NEUTRON_CHAIN_NAME)
    .dest(OSMOSIS_CHAIN_NAME)
    .get();

let pool_id = setup_cl_pool(&mut test_ctx, &ntrn_on_osmo_denom, OSMOSIS_CHAIN_DENOM)?;
}

This sets up a CL pool on Osmosis using NTRN and OSMO as the trading pair. Because NTRN on Osmosis will be transferred over IBC, a helper function is used to get the correct denom on Osmosis.

1.2 Set up the Program config builder and prepare the relevant accounts

The Program Manager uses a builder pattern to construct the program configuration. We set up the three accounts that will be used in the liquidity provision and withdrawal flow.

#![allow(unused)]
fn main() {
let mut builder = ProgramConfigBuilder::new(NEUTRON_CHAIN_ADMIN_ADDR.to_string());
let osmo_domain = Domain::CosmosCosmwasm(OSMOSIS_CHAIN_NAME.to_string());
let ntrn_domain = Domain::CosmosCosmwasm(NEUTRON_CHAIN_NAME.to_string());

// Create account information for LP input, LP output and final (LW) output accounts
let cl_input_acc_info = AccountInfo::new("cl_input".to_string(), &osmo_domain, AccountType::default());
let cl_output_acc_info = AccountInfo::new("cl_output".to_string(), &osmo_domain, AccountType::default());
let final_output_acc_info = AccountInfo::new("final_output".to_string(), &osmo_domain, AccountType::default());

// Add accounts to builder
let cl_input_acc = builder.add_account(cl_input_acc_info);
let cl_output_acc = builder.add_account(cl_output_acc_info);
let final_output_acc = builder.add_account(final_output_acc_info);
}

1.3 Configure the libraries

Next we configure the libraries for providing and withdrawing liquidity. Each library is configured with input and output accounts and specific parameters for their operation.

Note how cl_output_acc serves a different purpose for each of those libraries:

  • for the Liquidity Provider library, it is the output account
  • for the Liquidity Withdrawer library, it is the input account

#![allow(unused)]
fn main() {
// Configure Liquidity Provider library
let cl_lper_config = LibraryConfig::ValenceOsmosisClLper(
    valence_osmosis_cl_lper::msg::LibraryConfig {
        input_addr: cl_input_acc.clone(),
        output_addr: cl_output_acc.clone(),
        lp_config: LiquidityProviderConfig {
            pool_id: pool_id.into(),
            pool_asset_1: ntrn_on_osmo_denom.to_string(),
            pool_asset_2: OSMOSIS_CHAIN_DENOM.to_string(),
            global_tick_range: TickRange {
                lower_tick: Int64::from(-1_000_000),
                upper_tick: Int64::from(1_000_000),
            },
        },
    },
);

// Configure Liquidity Withdrawer library
let cl_lwer_config = LibraryConfig::ValenceOsmosisClWithdrawer(
    valence_osmosis_cl_withdrawer::msg::LibraryConfig {
        input_addr: cl_output_acc.clone(),
        output_addr: final_output_acc.clone(),
        pool_id: pool_id.into(),
    },
);

// Add libraries to builder
let cl_lper_library = builder.add_library(LibraryInfo::new(
    "test_cl_lper".to_string(),
    &osmo_domain,
    cl_lper_config,
));

let cl_lwer_library = builder.add_library(LibraryInfo::new(
    "test_cl_lwer".to_string(),
    &osmo_domain,
    cl_lwer_config,
));
}

1.4 Link the accounts and libraries

Input links (the first array in the add_link() call) grant the library permission to execute on the specified accounts. Output links specify where the fungible results of a given function execution should be routed.

#![allow(unused)]
fn main() {
// Link input account -> liquidity provider -> output account
builder.add_link(&cl_lper_library, vec![&cl_input_acc], vec![&cl_output_acc]);
// Link output account -> liquidity withdrawer -> final output account
builder.add_link(&cl_lwer_library, vec![&cl_output_acc], vec![&final_output_acc]);
}

1.5 Create authorizations

Next we create authorizations for both providing and withdrawing liquidity. Each authorization contains a subroutine that specifies which function to call on which library. By default, calling these subroutines is permissionless; using the AuthorizationBuilder, however, we can constrain the authorizations as necessary.

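The cl_lper_function and cl_lwer_function passed to the builders below are atomic functions targeting the two libraries. A minimal sketch of how they might be constructed, mirroring the builder pattern from the earlier example (the "process_function" message name and the exact wiring are assumptions here; see the full example for the working code):

#![allow(unused)]
fn main() {
    // Sketch: atomic function pointing at the liquidity provider library.
    let cl_lper_function = AtomicFunctionBuilder::new()
        .with_contract_address(cl_lper_library.clone())
        .with_message_details(MessageDetails {
            message_type: MessageType::CosmwasmExecuteMsg,
            message: Message {
                name: "process_function".to_string(),
                params_restrictions: None,
            },
        })
        .build();

    // Sketch: atomic function pointing at the liquidity withdrawer library.
    let cl_lwer_function = AtomicFunctionBuilder::new()
        .with_contract_address(cl_lwer_library.clone())
        .with_message_details(MessageDetails {
            message_type: MessageType::CosmwasmExecuteMsg,
            message: Message {
                name: "process_function".to_string(),
                params_restrictions: None,
            },
        })
        .build();
}
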
#![allow(unused)]
fn main() {
builder.add_authorization(
    AuthorizationBuilder::new()
        .with_label("provide_liquidity")
        .with_subroutine(
            AtomicSubroutineBuilder::new()
                .with_function(cl_lper_function)
                .build(),
        )
        .build(),
);

builder.add_authorization(
    AuthorizationBuilder::new()
        .with_label("withdraw_liquidity")
        .with_subroutine(
            AtomicSubroutineBuilder::new()
                .with_function(cl_lwer_function)
                .build(),
        )
        .build(),
);
}

1.6 Set up the Polytone connections

For cross-domain Programs to communicate between different domains, we instantiate the Polytone contracts and save the configuration in our Program Manager.

setup_polytone sets up the connection between two domains and therefore expects the following parameters:

  • source and destination chain names
  • source and destination chain ids
  • source and destination chain native denoms

#![allow(unused)]
fn main() {
// prior to initializing the manager, we do the middleware plumbing
setup_polytone(
    &mut test_ctx,
    NEUTRON_CHAIN_NAME,
    OSMOSIS_CHAIN_NAME,
    NEUTRON_CHAIN_ID,
    OSMOSIS_CHAIN_ID,
    NEUTRON_CHAIN_DENOM,
    OSMOSIS_CHAIN_DENOM,
)?;
}

1.7 Initialize the program

Calling builder.build() here takes a snapshot of the existing builder state.

That state is then passed on to the use_manager_init() call, which consumes it and builds the final program configuration before initializing it.

#![allow(unused)]
fn main() {
let mut program_config = builder.build();
use_manager_init(&mut program_config)?;
}

Congratulations! The program is now initialized across the two chains!

2. Executing the Program

After the initialization, we are ready to start processing messages. For a message to be executed, it first needs to be enqueued to the processor.

2.1 Providing Liquidity

If there are tokens available in the CL Input account, we are ready to provide liquidity. To enqueue the provide liquidity message:

#![allow(unused)]
fn main() {
// build the processor message for providing liquidity
let lp_message = ProcessorMessage::CosmwasmExecuteMsg {
    msg: Binary::from(serde_json::to_vec(
        &valence_library_utils::msg::ExecuteMsg::<_, ()>::ProcessFunction(
            valence_osmosis_cl_lper::msg::FunctionMsgs::ProvideLiquidityDefault {
                bucket_amount: Uint64::new(10),
            },
        ),
    )?),
};

// wrap the processor message in an authorization module call
let provide_liquidity_msg = valence_authorization_utils::msg::ExecuteMsg::PermissionlessAction(
    valence_authorization_utils::msg::PermissionlessMsg::SendMsgs {
        label: "provide_liquidity".to_string(),
        messages: vec![lp_message],
        ttl: None,
    },
);

contract_execute(
    test_ctx
        .get_request_builder()
        .get_request_builder(NEUTRON_CHAIN_NAME),
    &authorization_contract_address,
    DEFAULT_KEY,
    &serde_json::to_string(&provide_liquidity_msg)?,
    GAS_FLAGS,
)?;
}

Now anyone can tick the processor to execute the message. After receiving a tick, the processor will execute the message at the head of the queue and send a callback to the authorization contract with the result.

#![allow(unused)]
fn main() {
contract_execute(
    test_ctx
        .get_request_builder()
        .get_request_builder(OSMOSIS_CHAIN_NAME),
    &osmo_processor_contract_address,
    DEFAULT_KEY,
    &serde_json::to_string(
        &valence_processor_utils::msg::ExecuteMsg::PermissionlessAction(
            valence_processor_utils::msg::PermissionlessMsg::Tick {},
        ),
    )?,
    &format!(
        "--gas=auto --gas-adjustment=3.0 --fees {}{}",
        5_000_000, OSMOSIS_CHAIN_DENOM
    ),
)?;
}

2.2 Withdraw Liquidity

To enqueue the withdraw liquidity message:

#![allow(unused)]
fn main() {
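// Note: `output_acc_cl_position` and `liquidity_amount` are assumed to have been
// obtained beforehand by querying the CL position held by the CL Output account;
// see the full example for the query code.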
// build the processor message for withdrawing liquidity
let lw_message = ProcessorMessage::CosmwasmExecuteMsg {
    msg: Binary::from(serde_json::to_vec(
        &valence_library_utils::msg::ExecuteMsg::<_, ()>::ProcessFunction(
            valence_osmosis_cl_withdrawer::msg::FunctionMsgs::WithdrawLiquidity {
                position_id: output_acc_cl_position.position_id.into(),
                liquidity_amount: Some(liquidity_amount),
            },
        ),
    )?),
};

// wrap the processor message in an authorization module call
let withdraw_liquidity_msg = valence_authorization_utils::msg::ExecuteMsg::PermissionlessAction(
    valence_authorization_utils::msg::PermissionlessMsg::SendMsgs {
        label: "withdraw_liquidity".to_string(),
        messages: vec![lw_message],
        ttl: None,
    },
);

contract_execute(
    test_ctx
        .get_request_builder()
        .get_request_builder(NEUTRON_CHAIN_NAME),
    &authorization_contract_address,
    DEFAULT_KEY,
    &serde_json::to_string(&withdraw_liquidity_msg)?,
    GAS_FLAGS,
)?;
}

The above enqueues the message to withdraw liquidity. The processor will execute it the next time it is ticked.

#![allow(unused)]
fn main() {
contract_execute(
    test_ctx
        .get_request_builder()
        .get_request_builder(OSMOSIS_CHAIN_NAME),
    &osmo_processor_contract_address,
    DEFAULT_KEY,
    &serde_json::to_string(
        &valence_processor_utils::msg::ExecuteMsg::PermissionlessAction(
            valence_processor_utils::msg::PermissionlessMsg::Tick {},
        ),
    )?,
    &format!(
        "--gas=auto --gas-adjustment=3.0 --fees {}{}",
        5_000_000, OSMOSIS_CHAIN_DENOM
    ),
)?;
}

This concludes the walkthrough. You have now initialized the program and used it to provide and withdraw liquidity on Osmosis from Neutron!

Security

Valence Programs have been independently audited. Please find audit reports here.

If you believe you've found a security-related issue with Valence Programs, please disclose responsibly by contacting the Timewave team at security@timewave.computer.