🚧 This in-progress document contains information about the design of a cross-chain automation system.

Introduction

The Valence Protocol is a framework designed to help you build trust-minimized applications, called Valence programs, that execute across multiple chains. Valence programs are:

  • Easy to understand and quick to deploy: a program can be set up with a configuration file and no code.
  • Extensible: if we don't support a DeFi integration out of the box, you can write one yourself in a matter of hours!

👉 Example Use-case:

A DAO wants to bridge tokens to another chain and then deposit the tokens into a vault. After a certain date, it wants to allow a governance proposal to trigger unwinding of the position. While the position is active, it may also want to delegate the right to change vault parameters to a specific committee, as long as the parameters stay within a certain range.

Without Valence Programs, the DAO would have two choices:
Choice 1: Give the tokens to a multisig to execute actions on the DAO's behalf
Choice 2: Write custom smart contracts, and deploy them across multiple chains, to handle the cross-chain token operations.

Valence programs offer a third choice: the DAO does not need to trust a multisig, nor does it need to spend resources writing complex cross-chain logic.

Leveraging the Valence Protocol allows the DAO to rapidly configure and deploy a solution that meets its needs.

High-level overview

This section provides a high-level breakdown of the components that compose a Valence cross-chain program.

Domains

A Valence Program is an instance of the Valence Protocol. A Valence program's execution typically spans multiple Blockchains. In the Valence Protocol, we refer to the various supported Blockchains as domains.

A Domain is an environment in which the components that form a program (more on these later) can be instantiated (deployed).

Domains are defined by three properties:

  1. The Chain: the Blockchain's name e.g. Neutron, Osmosis, Ethereum mainnet.
  2. The Execution environment: the environment under which programs (typically smart contracts) can be executed on that particular chain e.g. CosmWasm, EVM, SVM.
  3. The type of Bridge used from the Main Domain to other domains e.g. Polytone over IBC, Hyperlane.

Within a particular ecosystem of Blockchains (e.g. Cosmos), the Valence Protocol usually defines one specific domain as the Main Domain, on which some supporting infrastructure components are deployed. Think of it as the home base supporting the execution and operations of Valence programs. This will be further clarified in the Authorizations & Processors section.

Below is a simplified representation of a program transferring tokens from a given input account on the Neutron domain, a CosmWasm-enabled smart contract platform secured by the Cosmos Hub, to a specified output account on the Osmosis domain, a well-known DeFi platform in the Cosmos ecosystem.

---
title: Valence cross-domain program
---
graph LR
  IA((Input
      Account))
  OA((Output
      Account))
  subgraph Neutron
  IA
  end
  subgraph Osmosis
  IA -- Transfer tokens --> OA
  end

Accounts

Valence Programs usually perform operations on tokens across multiple domains. To ensure that the funds remain safe throughout a program's execution, Valence Programs rely on a primitive called Valence Accounts.

A Valence Account is an escrow contract that can hold balances for various supported token types (e.g., in Cosmos, ICS-20 or CW-20 tokens) and ensures that only a restricted set of operations can be performed on the held tokens. A Valence Account is created (instantiated) on a specific domain and bound to a specific Valence Program. Valence Programs typically use multiple accounts for different purposes during the program's lifecycle. Valence Accounts are generic in nature; how they are used to form a program is entirely up to the program's creator.

Using a simple token swap program as an example: the program receives an amount of Token A in an input account and swaps these Token A for Token B using a DEX on the same domain (e.g. Neutron). After the swap operation, the received amount of Token B is temporarily held in a transfer account before being transferred to a final output account on another domain (e.g. Osmosis).

For this, the program will create the following accounts:

  • A Valence Account is created on the Neutron domain to act as the Input account.
  • A Valence Account is created on the Neutron domain to act as the Transfer account.
  • A Valence Account is created on the Osmosis domain to act as the Output account.
---
title: Valence token swap program
---
graph LR
  IA((Input
    Account))
  TA((Transfer
    Account))
  OA((Output
      Account))
  DEX
  subgraph Neutron
  IA -- Swap Token A --> DEX
  DEX -- Token B --> TA
  end
  subgraph Osmosis
  TA -- Transfer token B --> OA
  end

Note: this is a simplified representation.

Valence Accounts do not perform any operations on the held funds by themselves; the operations are performed by Valence Libraries.

Libraries and Functions

Valence Libraries contain the business logic that can be applied to the funds held by Valence Accounts. Most often, this logic is about performing operations on tokens, such as splitting, routing, or providing liquidity on a DEX. A Valence Account has to first approve (authorize) a Valence Library before the library can perform operations on that account's balances. A Valence Library exposes the Functions it supports. Valence Programs compose graphs of Valence Accounts and Valence Libraries, of varying complexity, to form sophisticated cross-chain workflows. During the course of a Valence Program's execution, Functions are called by external parties to trigger the library's operations on the linked accounts.

A typical pattern for a Valence Library is to have one (or more) input account(s) and one (or more) output account(s). While many libraries implement this pattern, it is by no means a requirement.
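The account/library relationship can be illustrated with a small sketch. This is not the actual Valence implementation; the Account type, its fields, and the withdraw function are hypothetical simplifications of the escrow behavior described above, where only an approved library may act on held balances.

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical, simplified model of a Valence Account: an escrow that
/// holds token balances and only lets approved libraries act on them.
pub struct Account {
    pub balances: HashMap<String, u128>,     // denom -> amount
    pub approved_libraries: HashSet<String>, // approved library addresses
}

impl Account {
    pub fn new() -> Self {
        Account { balances: HashMap::new(), approved_libraries: HashSet::new() }
    }

    /// The account owner approves a library to operate on held funds.
    pub fn approve_library(&mut self, library: &str) {
        self.approved_libraries.insert(library.to_string());
    }

    /// A library tries to withdraw funds; rejected unless approved.
    pub fn withdraw(&mut self, caller: &str, denom: &str, amount: u128) -> Result<u128, String> {
        if !self.approved_libraries.contains(caller) {
            return Err(format!("library {caller} is not authorized"));
        }
        let balance = self.balances.entry(denom.to_string()).or_insert(0);
        if *balance < amount {
            return Err("insufficient balance".to_string());
        }
        *balance -= amount;
        Ok(amount)
    }
}
```

Until a library is approved, any attempt it makes to act on the account's balances is rejected.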

Valence Libraries play a critical role in integrating Valence Programs with existing decentralized apps and services found in many Blockchain ecosystems, e.g. DEXes, liquid staking protocols, etc.

Now that we know accounts cannot perform any operations by themselves, we need to revisit the token swap program example (mentioned on the Accounts page) and bring Valence Libraries into the picture. The program receives an amount of Token A in an input account. A Token Swap library exposes a swap function that, when called, swaps the Token A held by the input account for Token B using a DEX on the same domain (e.g. Neutron) and transfers the proceeds to the transfer account. A Token Transfer library exposes a transfer function that, when called, transfers the Token B amount to a final output account on another domain (e.g. Osmosis). In this scenario, the DEX is an existing service on the hosting domain (e.g. Astroport on Neutron), so it is not part of the Valence Protocol.

The program is then composed of the following accounts & libraries:

  • A Valence Account is created on the Neutron domain to act as the Input account.
  • A Valence Account is created on the Neutron domain to act as the Transfer account.
  • A Token swap Valence Library is created on the Neutron domain, authorized by the Input Account (to be able to act on the held Token A balance), and configured with the Input account and Transfer account as the respective input and output for the swap operation.
  • A Token Transfer Valence Library is created on the Neutron domain, authorized by the Transfer Account (to be able to act on the held Token B balance), and configured with the Transfer account and Output account as the respective input and output for the transfer operation.
  • A Valence Account is created on the Osmosis domain to act as the Output account.
---
title: Valence token swap program
---
graph LR
  FC[[Function call]]
  IA((Input
      Account))
  TA((Transfer
      Account))
  OA((Output
      Account))
  TS((Token
      Swap Library))
  TT((Token
      Transfer Library))
  DEX
  subgraph Neutron
  FC -- 1/Swap --> TS
  TS -- Swap Token A --> IA
  IA -- Token A --> DEX
  DEX -- Token B --> TA
  FC -- 2/Transfer --> TT
  TT -- Transfer Token B --> TA
  end
  subgraph Osmosis
  TA -- Token B --> OA
  end

This example highlights the crucial role that Valence Libraries play for integrating Valence Programs with pre-existing decentralized apps and services.

One thing remains unclear in this example, though: how are Functions called? This is where Programs and Authorizations come into the picture.

Programs and Authorizations

A Valence Program is an instance of the Valence Protocol. It is a particular arrangement and configuration of accounts and libraries across multiple domains e.g. a POL (protocol-owned liquidity) lending relationship between two parties. Similarly to how a library exposes executable functions, programs are associated with a set of executable Subroutines.

A Subroutine is a vector of Functions. A Subroutine can call out to one or more function(s) from a single library, or from different ones. A Subroutine is limited to one execution domain (i.e. it cannot use functions from libraries instantiated on multiple domains).

A Subroutine can be:

  • Non Atomic, i.e., execute function one; if it succeeds, execute function two; if that succeeds, execute function three; and so on.
  • or Atomic, i.e., execute function one, function two, and function three, and if any of them fails, revert all steps.
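The two execution modes can be sketched in Rust. The Subroutine enum, Outcome type, and run function below are hypothetical simplifications, not the protocol's actual types; they only model the all-or-nothing versus stop-on-failure semantics described above, using the Executed / Rejected / PartiallyExecuted statuses that the Authorizations & Processors section describes for callbacks.

```rust
/// Hypothetical sketch of the two Subroutine flavors described above.
/// Functions are modeled as plain function pointers that succeed or fail.
pub enum Subroutine {
    Atomic(Vec<fn() -> Result<(), String>>),
    NonAtomic(Vec<fn() -> Result<(), String>>),
}

/// Outcome mirroring the semantics: atomic = all-or-nothing,
/// non-atomic = stop at the first failing function.
#[derive(Debug, PartialEq)]
pub enum Outcome {
    Executed,
    Rejected,                 // atomic: one failure reverts every step
    PartiallyExecuted(usize), // non-atomic: how many functions ran
}

pub fn run(sub: &Subroutine) -> Outcome {
    match sub {
        Subroutine::Atomic(fns) => {
            // All functions must succeed, otherwise the whole batch reverts.
            if fns.iter().all(|f| f().is_ok()) { Outcome::Executed } else { Outcome::Rejected }
        }
        Subroutine::NonAtomic(fns) => {
            let mut done = 0;
            for f in fns {
                if f().is_err() {
                    // Stop at the first failure; report how far we got.
                    return if done == 0 { Outcome::Rejected } else { Outcome::PartiallyExecuted(done) };
                }
                done += 1;
            }
            Outcome::Executed
        }
    }
}
```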

Valence programs are typically used to implement complex, cross-chain workflows that perform financial operations, in a trust-minimized way, on funds provided by various third parties. It follows that a program's subroutines should not all, or always, be executable by just anyone.

To specify fine-grained controls over who can initiate the execution of a Subroutine, program creators use the Authorizations module.

The Authorizations module is a powerful and flexible system that supports simple to advanced access control configuration schemes, such as:

  • Anyone can initiate execution of a Subroutine
  • Only permissioned actors can initiate execution of a Subroutine
  • Execution can only be initiated after a starting timestamp/block height
  • Execution can only be initiated up to a certain timestamp/block height
  • Authorizations are tokenized, which means they can be transferred by the holder or used in more sophisticated DeFi scenarios
  • Authorizations can expire
  • Authorizations can be enabled/disabled
  • Authorizations can tightly constrain parameters. For example, an authorization to execute a token transfer message can limit the execution to only supply the amount argument, and not the denom or receiver in the transfer message

To support the on-chain execution of Valence Programs, the Valence Protocol provides two important contracts: the Authorizations contract and the Processor contract.

The Authorizations contract is the entry point for users. The user sends a set of messages to the Authorizations contract, along with the label (id) of the authorization they want to execute. The Authorizations contract then verifies that the sender is authorized and that the messages are valid, constructs a MessageBatch based on the subroutine, and passes this batch to the Processor contract for execution. The authority to execute any Subroutine is tokenized so that these tokens can be transferred on-chain.

The Processor contract receives a MessageBatch and executes the contained Messages in sequence. It does this by maintaining execution queues, where the queue items are Message Batches. The Processor exposes a Tick message that allows anyone to trigger it, whereby the first batch in the queue is executed, or moved to the back of the queue if it is not yet executable (e.g. the retry period has not passed).

graph LR;
  User --> |Subroutine| Auth(Authorizations)
  Auth --> |Message Batch| P(Processor)
  P --> |Function 1| S1[Library 1]
  P --> |Function 2| S2[Library 2]
  P --> |Function N| S3[Library N]

WIP: Middleware

The Valence Middleware is a set of components that provide a unified interface for the Valence Type system.

At its core, middleware is made up of the following components.

Design goals

TODO: describe modifiable middleware, design goals and philosophy behind it

These goals are achieved with three key components:

  • brokers
  • type registries
  • Valence types

Middleware Brokers

Middleware brokers are responsible for managing the lifecycle of middleware instances and their associated types.

Middleware Type Registries

Middleware Type Registries are responsible for unifying a set of foreign types to be used in Valence Programs.

Valence Types

Valence Types are the canonical representations of various external domain implementations of some types.

Authorizations & Processors

The Authorizations and Processor contracts are foundational pieces of the Valence Protocol, as they enable on-chain (and cross-chain) execution of Valence Programs and enforce access control to the programs' subroutines via authorizations.

This section explains the rationale for these contracts in more detail and shares insights into their technical implementation, as well as how end users can interact with Valence programs via authorizations.

Rationale

  • To have a general purpose set of smart contracts that will provide the users (anyone if the authorization is permissionless or authorization token owners if it’s permissioned) with a single point of entry to interact with the Valence program, which can have libraries and accounts deployed on multiple chains.
  • To have all the user authorizations for multiple domains in a single place, making it very easy to control the application.
  • To have a single address (Processor) that will execute the messages for all the contracts in a domain using execution queues.
  • To only tick a single contract (Processor) which will go through the queues to route and execute the messages.
  • To be able to create, edit or remove different application permissions with ease.

Technical deep-dive

Assumptions

  • Funds: You cannot send funds with the messages.

  • Bridging: We are assuming that messages can be sent and confirmed bidirectionally between domains. From the authorization contract on the main domain to the processor in a different domain in one direction and the callback confirming the correct or failed execution in the other direction.

  • Instantiation: All these contracts can be instantiated beforehand, off-chain, with predictable addresses. Here is an example instantiation flow using Polytone:

    • Predict authorization contract address
    • Instantiate polytone contracts & set up relayers.
    • Predict proxy contract address for the authorization contract on each external domain.
    • Predict proxy contract address on the main domain for each processor on external domains.
    • Instantiate all processors. The sender on external domains will be the predicted proxy, and on the main domain it will be the authorization contract itself.
    • Instantiate authorization contract with all the processors and their predicted proxies for external domains and the processor on the main domain.
  • Relaying: relayers will be running once everything is instantiated.

  • Tokenfactory: the main domain has the tokenfactory module with no token creation fee, so that we can create and mint authorization tokens at no additional cost.

  • Domains: in the current version, actions in each authorization will be limited to a single domain.

Processor

The Processor will be a contract on each domain of our workflow. It handles the execution queues, which contain Message Batches. The Processor can be ticked permissionlessly, which executes the next Message Batch in the queue if it is executable, or rotates it to the back of the queue if it isn't executable yet. The Processor also handles the Retry logic for each batch (if the batch is atomic) or function (if the batch is non-atomic). After a Message Batch has been executed successfully or has reached the maximum number of retries, it is removed from the execution queue and the Processor sends a callback with the execution information to the Authorization contract.

The processors will be instantiated in advance with the correct address that can send messages to them, according to the instantiation flow described in the Assumptions section.

The Authorization contract will be the only address allowed to add lists of functions to the execution queues. It will also be allowed to Pause/Resume the Processor, to arbitrarily remove functions from the queues, or to add certain messages at a specific position.

There will be two execution queues: one High and one Med. This allows giving different priorities to Message Batches.

Execution

When a Processor is Ticked, we take the first MessageBatch from the queue (High if there are batches there, Med otherwise). The batch is then executed in different ways depending on whether it is Atomic or NonAtomic.

  • For Atomic batches, the Processor executes them by sending them to itself and trying to execute them in a Fire and Forget manner. If this execution fails, we check the batch's RetryLogic to decide whether it is to be re-queued or not (if not, we send a callback with Rejected status to the Authorization contract). If it succeeds, we send a callback with Executed status to the Authorization contract.
  • For NonAtomic batches, we execute the functions one by one, applying each function's RetryLogic individually if it fails. NonAtomic functions might also be confirmed via CallbackConfirmations, in which case we keep them in a separate Map until we receive that specific callback. Each time a function is confirmed, we re-queue the batch and keep track of which function has to execute next. If at some point a function uses up all its retries, we send a callback to the Authorization contract with a PartiallyExecuted(num_of_functions_executed) status; if all functions succeed, the status is Executed, and if none were executed it is Rejected. NonAtomic batches require the Processor to be ticked each time the batch reaches the top of the queue, so at least as many ticks are needed as there are functions in the batch, and each function has to wait for its turn.
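The tick behavior described above can be sketched as follows. The Processor, Batch, and TickResult types are hypothetical simplifications: a single execute predicate stands in for real batch execution, and only the High/Med queue selection and the retry/requeue/reject decision for atomic batches are modeled.

```rust
use std::collections::VecDeque;

/// Hypothetical, simplified Processor state: two FIFO queues and a
/// retry decision for atomic batches, as described above.
#[derive(Debug, PartialEq)]
pub struct Batch { pub id: u64, pub retries: u32, pub max_retries: u32 }

#[derive(Debug, PartialEq)]
pub enum TickResult { Executed(u64), Requeued(u64), Rejected(u64), Idle }

pub struct Processor {
    pub high: VecDeque<Batch>,
    pub med: VecDeque<Batch>,
}

impl Processor {
    /// Anyone can tick: take the first batch from High (or Med if High is
    /// empty), try it, and either finish it, rotate it to the back for a
    /// retry, or reject it once its retries are used up.
    pub fn tick(&mut self, execute: fn(&Batch) -> bool) -> TickResult {
        let queue = if !self.high.is_empty() { &mut self.high } else { &mut self.med };
        let Some(mut batch) = queue.pop_front() else { return TickResult::Idle };
        if execute(&batch) {
            TickResult::Executed(batch.id) // callback: Executed
        } else if batch.retries < batch.max_retries {
            batch.retries += 1;
            let id = batch.id;
            queue.push_back(batch); // rotate to the back and retry later
            TickResult::Requeued(id)
        } else {
            TickResult::Rejected(batch.id) // callback: Rejected
        }
    }
}
```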

Storage

The Processor will receive batches of messages from the authorization contract and will enqueue them in a custom storage structure we designed for this purpose, called a QueueMap. This structure is a FIFO queue with owner privileges (it allows the owner to insert or remove from any position in the queue). Each “item” stored in the queue is a MessageBatch object that looks like this:

pub struct MessageBatch {
    pub id: u64,
    pub msgs: Vec<ProcessorMessage>,
    pub subroutine: Subroutine,
    pub priority: Priority,
    pub retry: Option<CurrentRetry>,
}
  • id: represents the global id of the batch. The Authorization contract identifies each batch with an id so that it can interpret the callbacks it receives from each processor. This id is unique for the entire application.
  • msgs: the messages the processor needs to execute for this batch (e.g. a CosmWasm ExecuteMsg or MigrateMsg).
  • subroutine: This is the config that the authorization table defines for the execution of these functions. With this field we can know if the functions need to be executed atomically or not atomically, for example, and the retry logic for each batch/function depending on the config type.
  • priority (for internal use): batches will be queued in different priority queues when they are received from the authorization contract. We also keep this priority here because they might need to be re-queued after a failed execution and we need to know where to re-queue them.
  • retry (for internal use): we are keeping the current retry we are at (if the execution previously failed) to know when to abort if we exceed the max retry amounts.
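As a rough illustration of the QueueMap idea, here is a minimal sketch built on a VecDeque. The owner check and method names are hypothetical; the real QueueMap is a custom on-chain storage structure, not an in-memory queue.

```rust
use std::collections::VecDeque;

/// Hypothetical sketch of the QueueMap idea: a FIFO queue that also
/// grants an owner the privilege to insert or remove at any position.
pub struct QueueMap<T> {
    owner: String,
    items: VecDeque<T>,
}

impl<T> QueueMap<T> {
    pub fn new(owner: &str) -> Self {
        QueueMap { owner: owner.to_string(), items: VecDeque::new() }
    }

    /// Normal FIFO behavior: enqueue at the back, dequeue from the front.
    pub fn push_back(&mut self, item: T) { self.items.push_back(item); }
    pub fn pop_front(&mut self) -> Option<T> { self.items.pop_front() }

    /// Owner-only: insert at an arbitrary position.
    pub fn insert_at(&mut self, sender: &str, index: usize, item: T) -> Result<(), String> {
        if sender != self.owner { return Err("only the owner may insert".into()); }
        self.items.insert(index, item);
        Ok(())
    }

    /// Owner-only: remove from an arbitrary position.
    pub fn remove_at(&mut self, sender: &str, index: usize) -> Result<Option<T>, String> {
        if sender != self.owner { return Err("only the owner may remove".into()); }
        Ok(self.items.remove(index))
    }
}
```

In the protocol, the owner role corresponds to the Authorization contract, which is the only address allowed to insert or evict messages at arbitrary positions.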

Authorization

The authorization contract will be a single contract deployed on the main domain that defines the authorizations of the top-level application, which can include libraries in different domains (chains). For each domain, there will be one Processor (with its corresponding execution queues). The Authorization contract will connect to all of the Processors using a connector (e.g. Polytone, Hyperlane…) and will route the Message Batches to be executed to the right domain. At the same time, for each external domain, we will have a proxy contract on the main domain which will receive the callbacks sent from the processor on that external domain with the ExecutionResult of the Message Batch.

The contract will be instantiated once at the very beginning and will be used during the entire top-level application lifetime. Users will never interact with the individual Smart Contracts of each workflow, but with the Authorization contract directly.

Instantiation

When the contract is instantiated, it will be provided the following information:

  • Processor contract on main domain.

  • [(Domain, Connector(Polytone_note_contract), Processor_contract_on_domain, callback_proxy, IBC_Timeout_settings)]: if it's a cross-domain application, an array will be passed with each external domain label and its corresponding connector contracts and proxies, instantiated beforehand. For each connector, there will also be a proxy corresponding to that external domain, because it's a two-way communication flow and we need to receive callbacks. Additionally, we need a set of Timeout settings for the bridge, to know for how long the messages sent through the connector are going to be valid.

  • Admin of the contract (if different to sender).

The instantiation will set up all the processors on each domain so that we can start instantiating the libraries afterwards and providing the correct Processor addresses to each of them depending on which domain they are in.

Owner Functions

  • create_authorizations(vec[Authorization]): provides a list of authorizations, which is the core information of the authorization contract; it includes all the possible sets of functions that can be executed. Each authorization contains the following information:

    • Label: unique name of the authorization. The label is used to identify the authorization and as the subdenom of the tokenfactory token if the authorization is permissioned. Due to tokenfactory module restrictions, the max length of this field is 44 characters. Example: if the label is withdraw and only address neutron123 is allowed to execute this authorization, we will create the token factory/<contract_addr>/withdraw and mint one to that address. If withdraw were permissionless, no token is needed, so none is created.

    • Mode: can either be Permissioned or Permissionless. If Permissionless is chosen, any address can execute this function list. In the case of Permissioned, we also specify which permission type we want (with CallLimit or without); a list of addresses is provided in both cases. If there is a CallLimit, we mint a certain amount of tokens for each address passed; if there isn't, we mint only one token, and that token is reused for every call.

    • NotBefore: from what time the authorization can be executed. We can specify a block height or a timestamp.

    • Expiration: until when (what block or timestamp) this authorization is valid.

    • MaxConcurrentExecutions (default 1): to prevent DDoS attacks and clogging of the execution queues, an authorization's subroutines are only allowed to be present in the execution queue a maximum number of times (default 1 unless overwritten).

    • Subroutine: set of functions in a specific order to be executed. Subroutines can be of two types: Atomic or NonAtomic. For the Atomic subroutines, we will provide an array of Atomic functions and an optional RetryLogic for the entire subroutine. For the NonAtomic subroutines we will just provide an array of NonAtomic functions.

      • AtomicFunction: each Atomic function has the following parameters:

        • Domain of execution (must be the same for all functions in v1).

        • MessageDetails: the type (e.g. CosmWasmExecuteMsg) and the message (the name of the message in the ExecuteMsg json), with, if applicable, three lists of parameter restrictions: one for MustBeIncluded, one for CannotBeIncluded and one for MustBeValue. This gives more control over the authorizations. Example: we might want one authorization that provides the message with parameters (an admin function for that service) and another authorization for the message without any parameters (a user function for that service).

        • Contract address that will execute it.

      • NonAtomicFunction: each NonAtomic function has the following parameters:

        • Domain of execution

        • MessageDetails (like above).

        • Contract address that will execute it.

        • RetryLogic (optional, self-explanatory).

        • CallbackConfirmation (optional): defines whether a NonAtomicFunction is completed after receiving a callback (Binary) from a specific address, instead of after a correct execution. This is used when correct message execution is not enough to consider the message completed; it defines what callback we should receive from a specific address to flag that message as completed. For this, the processor appends an execution_id to the message, which the service will also pass in the callback to identify what function the callback is for.

    • Priority (default Med): the priority of a set of functions can be set to High. If this is the case, they go into a preferential execution queue. Messages in the High priority queue are taken before messages in the Med priority queue. All authorizations have an initial state of Enabled.

    Here is an example of an Authorization table after its creation:

    Authorization Table

  • add_external_domains([external_domains]): if we want to add external domains after instantiation.

  • modify_authorization(label, updated_values): can modify certain updatable fields of the authorization: start_time, expiration, max_concurrent_executions and priority.

  • disable_authorization(label): sets an Authorization to the Disabled state. Disabled authorizations can no longer be run.

  • enable_authorization(label): sets an Authorization to the Enabled state so that it can be run again.

  • mint_authorization(label, vec[(addresses, Optional: amounts)]): if the authorization is Permissioned with CallLimit: true, this function will mint the corresponding token amounts of that authorization to the addresses provided. If CallLimit: false it will mint 1 token to the new addresses provided.

  • pause_processor(domain): pause the processor of the domain.

  • resume_processor(domain): resume the processor of the domain.

  • insert_messages(label, queue_position, queue_type, vec[ProcessorMessage]): inserts the given set of messages at a specific position in the queue.

  • evict_messages(label, queue_position, queue_type): removes the set of messages at the specified position in a queue.

  • add_sub_owners(vec[addresses]): adds the provided addresses as second-tier owners. These sub_owners can do everything except adding/removing admins.

  • remove_sub_owners(vec[addresses]): remove these addresses from the sub_owner list.
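The MustBeIncluded / CannotBeIncluded / MustBeValue restrictions from the MessageDetails described under create_authorizations can be sketched as a simple check. The ParamRestrictions type and the flat string map standing in for the message's json parameters are hypothetical simplifications.

```rust
use std::collections::BTreeMap;

/// Hypothetical sketch of the MessageDetails parameter restrictions:
/// MustBeIncluded, CannotBeIncluded, and MustBeValue, checked against a
/// flat map of message parameters (a real message would be json).
pub struct ParamRestrictions {
    pub must_be_included: Vec<String>,
    pub cannot_be_included: Vec<String>,
    pub must_be_value: Vec<(String, String)>, // (param, required value)
}

pub fn check(restrictions: &ParamRestrictions, params: &BTreeMap<String, String>) -> Result<(), String> {
    for key in &restrictions.must_be_included {
        if !params.contains_key(key) {
            return Err(format!("parameter {key} must be included"));
        }
    }
    for key in &restrictions.cannot_be_included {
        if params.contains_key(key) {
            return Err(format!("parameter {key} cannot be included"));
        }
    }
    for (key, value) in &restrictions.must_be_value {
        if params.get(key) != Some(value) {
            return Err(format!("parameter {key} must equal {value}"));
        }
    }
    Ok(())
}
```

For example, the token transfer authorization mentioned earlier could pin denom and receiver with MustBeValue while leaving amount free (the concrete values below are illustrative only).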

User Actions

  • send_msgs(label, vec[ProcessorMessage]): users can run an authorization with a specific label. If the authorization is Permissioned, the Authorization contract checks that the sender is allowed to execute it: for Permissioned (without limit), the user must hold the token in their wallet; for Permissioned (with limit), the user must send the token along with the messages. Along with the authorization label, the user provides an array of encoded messages, together with the message type (e.g. CosmwasmExecuteMsg) and any other parameters for that specific ProcessorMessage (e.g. for a CosmwasmMigrateMsg we also need to pass a code_id). The contract then checks that the messages match the ones defined in the authorization (and in the correct order) and that all parameter restrictions, if applicable, are satisfied.

    If all checks are correct, the contract will route the messages to the correct Processor with an execution_id for the processor to call back with. This execution_id is unique for the entire application. If the execution of all the actions is confirmed via a callback, we burn the token; if they fail, we send the token back. Here is an example flowchart of how a user interacts with the authorization contract to execute messages in a service sitting on a domain:

User flowchart
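The permission check at the heart of send_msgs can be sketched as follows. The Mode enum and may_execute function are hypothetical simplifications; the real contract inspects actual token balances and the funds sent along with the message.

```rust
/// Hypothetical sketch of the send_msgs permission check: Permissionless
/// always passes; Permissioned (without limit) requires the sender to hold
/// the authorization token; Permissioned (with CallLimit) requires the
/// token to be sent along (burned on success, refunded on failure).
pub enum Mode {
    Permissionless,
    PermissionedWithoutLimit,
    PermissionedWithCallLimit,
}

pub fn may_execute(mode: &Mode, holds_token: bool, sent_token: bool) -> Result<(), String> {
    match mode {
        Mode::Permissionless => Ok(()),
        Mode::PermissionedWithoutLimit if holds_token => Ok(()),
        Mode::PermissionedWithoutLimit => Err("sender does not hold the authorization token".into()),
        Mode::PermissionedWithCallLimit if sent_token => Ok(()),
        Mode::PermissionedWithCallLimit => Err("authorization token must be sent with the messages".into()),
    }
}
```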

Callbacks

There are different types of callbacks in our application. Each of them has a specific function and is used in a different part of the application.

Function Callbacks

For the execution of NonAtomic batches, each function in the batch can optionally be confirmed with a callback from a specific address. When the processor reaches a function that requires a callback, it injects the batch's execution_id into the message that is going to be executed on the library. The library therefore needs to be ready to receive that execution_id, and to know what the expected callback is and where it has to come from in order to confirm that function; otherwise the function stays unconfirmed and the batch does not move on to the next function. The callback is sent to the processor with the execution_id so that the processor knows which function is being confirmed, and the processor then validates that the correct callback was received from the correct address.

If the processor receives the expected callback from the correct address, the batch will move to the next function. If it receives a different callback than expected from that address, the execution of that function will be considered failed and it will be retried (if applicable). In any case, a callback must be received to determine if the function was successful or not.
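The confirmation logic can be sketched as a small matching function. The types below are hypothetical; in particular, treating a callback from an unrelated address as ignored (rather than failed) is an assumption, since the text above only specifies behavior for callbacks from the expected address.

```rust
/// Hypothetical sketch of NonAtomic function confirmation: a pending
/// function records which callback payload it expects and from which
/// address; only that exact pair confirms it.
#[derive(Debug, PartialEq)]
pub enum Confirmation {
    Confirmed, // expected callback from expected address: move on
    Failed,    // wrong payload from that address: retry if applicable
    Ignored,   // callback from an unrelated address (assumption)
}

pub struct PendingFunction {
    pub execution_id: u64,
    pub expected_address: String,
    pub expected_payload: Vec<u8>, // the expected Binary callback
}

pub fn handle_callback(pending: &PendingFunction, sender: &str, payload: &[u8]) -> Confirmation {
    if sender != pending.expected_address {
        return Confirmation::Ignored;
    }
    if payload == pending.expected_payload.as_slice() {
        Confirmation::Confirmed
    } else {
        Confirmation::Failed
    }
}
```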

Processor Callbacks

Once a Processor batch is executed, or it fails with no more retries available, the Processor will send a callback to the Authorization contract with the execution_id of the batch and the result of the execution. All this information is stored in the Authorization contract state so that the history of all executions can be queried from it. This is what a ProcessorCallbackInfo looks like:

pub struct ProcessorCallbackInfo {
    // Execution ID that the callback was for
    pub execution_id: u64,
    // Who started this operation, used for tokenfactory actions
    pub initiator: OperationInitiator,
    // Address that can send a bridge timeout or success for the message (if applied)
    pub bridge_callback_address: Option<Addr>,
    // Address that will send the callback for the processor
    pub processor_callback_address: Addr,
    // Domain that the callback came from
    pub domain: Domain,
    // Label of the authorization
    pub label: String,
    // Messages that were sent to the processor
    pub messages: Vec<ProcessorMessage>,
    // Optional ttl for re-sending in case of bridged timeouts
    pub ttl: Option<Expiration>,
    // Result of the execution
    pub execution_result: ExecutionResult,
}

pub enum ExecutionResult {
    InProcess,
    // Everything executed successfully
    Success,
    // Execution was rejected, and the reason
    Rejected(String),
    // Partially executed, for non-atomic function batches
    // Indicates how many functions were executed and the reason the next function was not executed
    PartiallyExecuted(usize, String),
    // Removed by Owner - happens when, from the authorization contract, a remove item from queue is sent
    RemovedByOwner,
    // Timeout - happens when the bridged message times out
    // We'll use a flag to indicate if the timeout is retriable or not
    // true - retriable
    // false - not retriable
    Timeout(bool),
    // Unexpected error that should never happen but we'll store it here if it ever does
    UnexpectedError(String),
}
}

The key information here is the label, which identifies the authorization that was executed; the messages, which identify what the user sent; and the execution_result, which indicates whether the execution was successful, partially successful, or rejected.
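A consumer of these callbacks typically needs to answer two questions: is the batch finished, and can it still be retried? The helpers below are illustrative (the enum is redefined locally, without the cosmwasm types, so the sketch stands alone); they are not part of the contract's API.

```rust
// Illustrative helpers over the ExecutionResult variants shown above.
// The enum is redefined locally so the sketch is self-contained.
#[derive(Debug, PartialEq)]
pub enum ExecutionResult {
    InProcess,
    Success,
    Rejected(String),
    PartiallyExecuted(usize, String),
    RemovedByOwner,
    Timeout(bool),
    UnexpectedError(String),
}

// Only a bridged timeout flagged as retriable can still be re-sent.
pub fn is_retriable(result: &ExecutionResult) -> bool {
    matches!(result, ExecutionResult::Timeout(true))
}

// Every variant except InProcess is a terminal state for the batch.
pub fn is_final(result: &ExecutionResult) -> bool {
    !matches!(result, ExecutionResult::InProcess)
}
```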

Bridge Callbacks

When batches are executed on external domains, messages must be sent through bridges, so we need to know if, for example, a timeout happened, and keep track of it. For this reason, each supported bridge has its own callbacks, with specific logic that is executed when they are received. For Polytone timeouts, we check whether the ttl field has expired: if it is still valid, we allow permissionless retries. If the ttl has expired, we set the ExecutionResult to a non-retriable timeout and, if the user sent an authorization token to execute the authorization, we send the token back to the user.
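The ttl check above can be sketched as follows. This is a simplification: the Expiration type here only models block heights, whereas the actual contract uses cw_utils::Expiration.

```rust
// Sketch of the Polytone timeout handling described above, with a simplified
// Expiration type (block height only). Returns true when a permissionless
// retry is still allowed.
pub enum Expiration {
    AtHeight(u64),
    Never,
}

pub fn retry_allowed(ttl: Option<&Expiration>, current_height: u64) -> bool {
    match ttl {
        Some(Expiration::AtHeight(h)) => current_height < *h,
        Some(Expiration::Never) => true,
        // no ttl configured: treat the timeout as not retriable
        None => false,
    }
}
```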

Libraries

This section contains a detailed description of the various libraries that can be used to rapidly build Valence cross-chain programs.

Valence Protocol libraries:

Astroport LPer library

The Valence Astroport LPer library allows liquidity to be provided into an Astroport Liquidity Pool from an input account, depositing the LP tokens into an output account.

High-level flow

---
title: Astroport Liquidity Provider
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Astroport
      Liquidity
      Provider]
  AP[Astroport
     Pool]
  P -- 1/Provide Liquidity --> S
  S -- 2/Query balances --> IA
  S -- 3/Compute amounts --> S
  S -- 4/Do Provide Liquidity --> IA
  IA -- 5/Provide Liquidity
				  [Tokens] --> AP
  AP -- 5'/Transfer LP Tokens --> OA

Functions

Function: ProvideDoubleSidedLiquidity
Parameters: expected_pool_ratio_range: Option<DecimalRange>
Description: Provide double-sided liquidity to the pre-configured Astroport Pool from the input account, and deposit the LP tokens into the output account. Aborts if the pool ratio is not within the expected_pool_ratio_range (if specified).

Function: ProvideSingleSidedLiquidity
Parameters: asset: String, limit: Option<Uint128>, expected_pool_ratio_range: Option<DecimalRange>
Description: Provide single-sided liquidity for the specified asset to the pre-configured Astroport Pool from the input account, and deposit the LP tokens into the output account. Aborts if the pool ratio is not within the expected_pool_ratio_range (if specified).
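The expected_pool_ratio_range guard can be sketched as a simple bounds check. Decimal is approximated with f64 here for a self-contained example; the contract uses cosmwasm_std decimal types, and the function name is illustrative.

```rust
// Hedged sketch of the expected_pool_ratio_range check: the call aborts when
// the pool's current ratio falls outside the caller-supplied range.
pub struct DecimalRange {
    pub min: f64,
    pub max: f64,
}

pub fn check_pool_ratio(ratio: f64, range: Option<&DecimalRange>) -> Result<(), String> {
    match range {
        Some(r) if ratio < r.min || ratio > r.max => Err(format!(
            "pool ratio {ratio} outside expected range [{}, {}]",
            r.min, r.max
        )),
        // no range configured, or ratio within bounds
        _ => Ok(()),
    }
}
```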

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account from which the funds are LPed
    pub input_addr: LibraryAccountType,
    // Account to which the LP tokens are forwarded
    pub output_addr: LibraryAccountType,
    // Pool address
    pub pool_addr: String,
    // LP configuration
    pub lp_config: LiquidityProviderConfig,
}

pub struct LiquidityProviderConfig {
    // Pool type, old Astroport pools use Cw20 lp tokens and new pools use native tokens, so we specify here what kind of token we are going to get.
    // We also provide the PairType structure of the right Astroport version that we are going to use for each scenario
    pub pool_type: PoolType,
    // Denoms of both native assets we are going to provide liquidity for
    pub asset_data: AssetData,
    // Slippage tolerance
    pub slippage_tolerance: Option<Decimal>,
}

#[cw_serde]
pub enum PoolType {
    NativeLpToken(valence_astroport_utils::astroport_native_lp_token::PairType),
    Cw20LpToken(valence_astroport_utils::astroport_cw20_lp_token::PairType),
}


pub struct AssetData {
    pub asset1: String,
    pub asset2: String,
}
}

Astroport Withdrawer library

The Valence Astroport Withdrawer library allows liquidity to be withdrawn from an Astroport Liquidity Pool from an input account, depositing the withdrawn tokens into an output account.

High-level flow

---
title: Astroport Liquidity Withdrawal
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Astroport
      Liquidity
      Withdrawal]
  AP[Astroport
     Pool]
  P -- 1/Withdraw Liquidity --> S
  S -- 2/Query balances --> IA
  S -- 3/Compute amounts --> S
  S -- 4/Do Withdraw Liquidity --> IA
  IA -- 5/Withdraw Liquidity
				  [LP Tokens] --> AP
  AP -- 5'/Transfer assets --> OA

Functions

Function: WithdrawLiquidity
Parameters: -
Description: Withdraw liquidity from the configured Astroport Pool from the input account and deposit the withdrawn tokens into the configured output account.

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account holding the LP position
    pub input_addr: LibraryAccountType,
    // Account to which the withdrawn assets are forwarded
    pub output_addr: LibraryAccountType,
    // Pool address
    pub pool_addr: String,
    // Liquidity withdrawer configuration
    pub withdrawer_config: LiquidityWithdrawerConfig,
}

pub struct LiquidityWithdrawerConfig {
    // Pool type: old Astroport pools use Cw20 LP tokens and new pools use native tokens, so we specify here what kind of token we will use.
    // We also provide the PairType structure of the right Astroport version that we are going to use for each scenario
    pub pool_type: PoolType,
}

pub enum PoolType {
    NativeLpToken,
    Cw20LpToken,
}
}

Valence Forwarder library

The Valence Forwarder library allows funds to be continuously forwarded from an input account to an output account, subject to time constraints. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Forwarder library.

High-level flow

---
title: Forwarder Library
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Forwarder
    Library]
  P -- 1/Forward --> S
  S -- 2/Query balances --> IA
  S -- 3/Do Send funds --> IA
  IA -- 4/Send funds --> OA

Functions

Function: Forward
Parameters: -
Description: Forward funds from the configured input account to the output account, according to the forwarding configs & constraints.

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account from which the funds are pulled
    pub input_addr: LibraryAccountType,
    // Account to which the funds are sent
    pub output_addr: LibraryAccountType,
    // Forwarding configuration per denom
    pub forwarding_configs: Vec<UncheckedForwardingConfig>,
    // Constraints on forwarding operations
    pub forwarding_constraints: ForwardingConstraints,
}

pub struct UncheckedForwardingConfig {
    // Denom to be forwarded (either native or CW20)
    pub denom: UncheckedDenom,
    // Max amount of tokens to be transferred per Forward operation
    pub max_amount: Uint128,
}

// Time constraints on forwarding operations
pub struct ForwardingConstraints {
    // Minimum interval between 2 successive forward operations,
    // specified either as a number of blocks, or as a time delta.
    min_interval: Option<Duration>,
}
}
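The min_interval constraint above can be sketched for the block-height variant. This is an illustration under stated assumptions, not the library's exact logic: the real config also supports a time-based Duration, which is omitted here.

```rust
// Sketch of the min_interval constraint (block-height variant only): a Forward
// call is allowed only once the configured number of blocks has elapsed since
// the previous forward.
pub fn can_forward(
    last_forward_height: Option<u64>,
    min_interval_blocks: Option<u64>,
    current_height: u64,
) -> bool {
    match (last_forward_height, min_interval_blocks) {
        (Some(last), Some(min)) => current_height >= last + min,
        // first forward, or no constraint configured
        _ => true,
    }
}
```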

Valence Generic IBC Transfer library

The Valence Generic IBC Transfer library allows funds to be transferred over IBC from an input account on a source chain to an output account on a destination chain. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the library.

Note: this library should not be used on Neutron, since Neutron requires fees to be paid to relayers for IBC transfers. For Neutron, prefer the dedicated (and optimized) Neutron IBC Transfer library instead.

High-level flow

---
title: Generic IBC Transfer Library
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Gen IBC Transfer
    Library]
  subgraph Chain 1
  P -- 1/IbcTransfer --> S
  S -- 2/Query balances --> IA
  S -- 3/Do Send funds --> IA
  end
  subgraph Chain 2
  IA -- 4/IBC Transfer --> OA
  end

Functions

Function: IbcTransfer
Parameters: -
Description: Transfer funds over IBC from an input account on a source chain to an output account on a destination chain.

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
struct LibraryConfig {
  // Account from which the funds are pulled (on the source chain)
  input_addr: LibraryAccountType,
  // Account to which the funds are sent (on the destination chain)
  output_addr: String,
  // Denom of the token to transfer
  denom: UncheckedDenom,
  // Amount to be transferred, either a fixed amount or the whole available balance.
  amount: IbcTransferAmount,
  // Memo to be passed in the IBC transfer message.
  memo: String,
  // Information about the destination chain.
  remote_chain_info: RemoteChainInfo,
  // Denom map for the Packet-Forwarding Middleware, to perform a multi-hop transfer.
  denom_to_pfm_map: BTreeMap<String, PacketForwardMiddlewareConfig>,
}

// Defines the amount to be transferred, either a fixed amount or the whole available balance.
enum IbcTransferAmount {
  // Transfer the full available balance of the input account.
  FullAmount,
  // Transfer the specified amount of tokens.
  FixedAmount(Uint128),
}

pub struct RemoteChainInfo {
  // Channel of the IBC connection to be used.
  channel_id: String,
  // Port of the IBC connection to be used.
  port_id: Option<String>,
  // Timeout for the IBC transfer.
  ibc_transfer_timeout: Option<Uint64>,
}

// Configuration for a multi-hop transfer using the Packet Forwarding Middleware
struct PacketForwardMiddlewareConfig {
  // Channel ID from the source chain to the intermediate chain
  local_to_hop_chain_channel_id: String,
  // Channel ID from the intermediate to the destination chain
  hop_to_destination_chain_channel_id: String,
  // Temporary receiver address on the intermediate chain
  hop_chain_receiver_address: String,
}
}
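The denom_to_pfm_map exists because a multi-hop transfer relies on the Packet Forward Middleware: the first hop delivers the funds to hop_chain_receiver_address on the intermediate chain, and a memo instructs that chain to forward them onward. The sketch below builds such a memo following the publicly documented PFM JSON convention; the exact memo the library produces may differ, and the function name is invented for the example.

```rust
// Hedged sketch of a Packet Forward Middleware memo for a multi-hop transfer:
// the intermediate chain reads the "forward" key and relays the funds over
// hop_to_destination_chain_channel_id to the final receiver.
pub fn pfm_forward_memo(hop_to_destination_channel: &str, final_receiver: &str) -> String {
    format!(
        r#"{{"forward":{{"receiver":"{final_receiver}","port":"transfer","channel":"{hop_to_destination_channel}"}}}}"#
    )
}
```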

Valence Neutron IBC Transfer library

The Valence Neutron IBC Transfer library allows funds to be transferred over IBC from an input account on Neutron to an output account on a destination chain. It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the library.

Note: this library should not be used on CosmWasm chains other than Neutron; it is tailored to Neutron, which requires fees to be paid to relayers for IBC transfers. For other CosmWasm chains, prefer the Generic IBC Transfer library instead.

High-level flow

---
title: Neutron IBC Transfer Library
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Neutron IBC Transfer
    Library]
  subgraph Neutron
  P -- 1/IbcTransfer --> S
  S -- 2/Query balances --> IA
  S -- 3/Do Send funds --> IA
  end
  subgraph Chain 2
  IA -- 4/IBC Transfer --> OA
  end

Functions

Function: IbcTransfer
Parameters: -
Description: Transfer funds over IBC from an input account on Neutron to an output account on a destination chain.

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
struct LibraryConfig {
  // Account from which the funds are pulled (on the source chain)
  input_addr: LibraryAccountType,
  // Account to which the funds are sent (on the destination chain)
  output_addr: String,
  // Denom of the token to transfer
  denom: UncheckedDenom,
  // Amount to be transferred, either a fixed amount or the whole available balance.
  amount: IbcTransferAmount,
  // Memo to be passed in the IBC transfer message.
  memo: String,
  // Information about the destination chain.
  remote_chain_info: RemoteChainInfo,
  // Denom map for the Packet-Forwarding Middleware, to perform a multi-hop transfer.
  denom_to_pfm_map: BTreeMap<String, PacketForwardMiddlewareConfig>,
}

// Defines the amount to be transferred, either a fixed amount or the whole available balance.
enum IbcTransferAmount {
  // Transfer the full available balance of the input account.
  FullAmount,
  // Transfer the specified amount of tokens.
  FixedAmount(Uint128),
}

pub struct RemoteChainInfo {
  // Channel of the IBC connection to be used.
  channel_id: String,
  // Port of the IBC connection to be used.
  port_id: Option<String>,
  // Timeout for the IBC transfer.
  ibc_transfer_timeout: Option<Uint64>,
}

// Configuration for a multi-hop transfer using the Packet Forwarding Middleware
struct PacketForwardMiddlewareConfig {
  // Channel ID from the source chain to the intermediate chain
  local_to_hop_chain_channel_id: String,
  // Channel ID from the intermediate to the destination chain
  hop_to_destination_chain_channel_id: String,
  // Temporary receiver address on the intermediate chain
  hop_chain_receiver_address: String,
}
}

Osmosis CL LPer library

The Valence Osmosis CL LPer library allows concentrated liquidity positions to be created on Osmosis from an input account, depositing the LP tokens into an output account.

High-level flow

---
title: Osmosis CL Liquidity Provider
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Osmosis CL
      Liquidity
      Provider]
  AP[Osmosis CL
     Pool]
  P -- 1/Provide Liquidity --> S
  S -- 2/Query balances --> IA
  S -- 3/Configure target
    range --> S
  S -- 4/Do Provide Liquidity --> IA
  IA -- 5/Provide Liquidity
				  [Tokens] --> AP
  AP -- 5'/Transfer LP Tokens --> OA

Concentrated Liquidity Position creation

Because of the way CL positions work, there are two ways to create them:

Default

Default position creation centers around the idea of creating a position with respect to the currently active tick of the pool.

This method expects a single parameter, bucket_amount, which describes how many buckets of the pool should be taken into account to both sides of the price curve.

Consider a situation where the current tick is 125, and the configured tick spacing is 10.

If this method is called with bucket_amount set to 5, the following logic will be performed:

  • find the current bucket range, which is 120 to 130
  • extend the current bucket ranges by 5 buckets to both sides, meaning that the range "to the left" will be extended by 5 * 10 = 50, and the range "to the right" will be extended by 5 * 10 = 50, resulting in the covered range from 120 - 50 = 70 to 130 + 50 = 180, giving the position tick range of (70, 180).
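The steps above can be written out as a small function. This is a worked sketch of the arithmetic in the example, not the library's actual implementation:

```rust
// Worked sketch of the Default tick-range derivation described above: find the
// current bucket, then extend it by bucket_amount buckets on each side.
pub fn default_tick_range(current_tick: i64, tick_spacing: i64, bucket_amount: i64) -> (i64, i64) {
    // current bucket range: round the active tick down to a spacing boundary
    let lower = current_tick.div_euclid(tick_spacing) * tick_spacing;
    let upper = lower + tick_spacing;
    // extend by bucket_amount * tick_spacing on both sides
    (
        lower - bucket_amount * tick_spacing,
        upper + bucket_amount * tick_spacing,
    )
}
```

With current_tick = 125, tick_spacing = 10 and bucket_amount = 5, this reproduces the (70, 180) range from the example.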

Custom

Custom position creation allows for more fine-grained control over the way the position is created.

This approach expects users to specify the following parameters:

  • tick_range, which describes the price range to be covered
  • token_min_amount_0 and token_min_amount_1 which are optional parameters that describe the minimum amount of tokens that should be provided to the pool.

With this flexibility a wide variety of positions can be created, such as those that are entirely single-sided.

Functions

Function: ProvideLiquidityDefault
Parameters: bucket_amount: Uint64
Description: Create a position on the pre-configured Osmosis Pool from the input account, following the Default approach described above, and deposit the LP tokens into the output account.

Function: ProvideLiquidityCustom
Parameters: tick_range: TickRange, token_min_amount_0: Option<Uint128>, token_min_amount_1: Option<Uint128>
Description: Create a position on the pre-configured Osmosis Pool from the input account, following the Custom approach described above, and deposit the LP tokens into the output account.

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account from which the funds are LPed
    pub input_addr: LibraryAccountType,
    // Account to which the LP position is forwarded
    pub output_addr: LibraryAccountType,
    // LP configuration
    pub lp_config: LiquidityProviderConfig,
}

pub struct LiquidityProviderConfig {
    // ID of the Osmosis CL pool
    pub pool_id: Uint64,
    // Pool asset 1 
    pub pool_asset_1: String,
    // Pool asset 2
    pub pool_asset_2: String,
    // Pool global price range
    pub global_tick_range: TickRange,
}
}
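A plausible use of global_tick_range is to bound any requested position range. The check below is illustrative (within_global is an invented name, and the actual library's validation may differ):

```rust
// Illustrative validation against the global_tick_range above: a requested
// position range must be well-formed and contained within the pool's global
// price range.
pub struct TickRange {
    pub lower: i64,
    pub upper: i64,
}

pub fn within_global(position: &TickRange, global: &TickRange) -> bool {
    position.lower < position.upper
        && position.lower >= global.lower
        && position.upper <= global.upper
}
```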

Osmosis CL liquidity withdrawer library

The Valence Osmosis CL Withdrawer library allows a concentrated liquidity position to be withdrawn from an Osmosis pool from an input account, transferring the resulting tokens to an output account.

High-level flow

---
title: Osmosis CL Liquidity Withdrawal
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Osmosis CL
      Liquidity
      Withdrawal]
  AP[Osmosis CL
     Pool]
  P -- 1/Withdraw Liquidity --> S
  S -- 2/Query balances --> IA
  S -- 3/Compute amounts --> S
  S -- 4/Do Withdraw Liquidity --> IA
  IA -- 5/Withdraw Liquidity
				  [LP Position] --> AP
  AP -- 5'/Transfer assets --> OA

Functions

Function: WithdrawLiquidity
Parameters: position_id: Uint64, liquidity_amount: String
Description: Withdraw liquidity from the configured Osmosis Pool from the input account, according to the given parameters, and transfer the withdrawn tokens to the configured output account.

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account holding the LP position
    pub input_addr: LibraryAccountType,
    // Account to which the withdrawn assets are forwarded
    pub output_addr: LibraryAccountType,
    // ID of the pool
    pub pool_id: Uint64,
}
}

Osmosis GAMM LPer library

The Valence Osmosis GAMM LPer library allows a pool to be joined on Osmosis, using the GAMM module (Generalized Automated Market Maker), from an input account, depositing the LP tokens into an output account.

High-level flow

---
title: Osmosis GAMM Liquidity Provider
---
graph LR
  IA((Input
      Account))
  OA((Output
          Account))
  P[Processor]
  S[Osmosis GAMM
      Liquidity
      Provider]
  AP[Osmosis
     Pool]
  P -- 1/Join Pool --> S
  S -- 2/Query balances --> IA
  S -- 3/Compute amounts --> S
  S -- 4/Do Join Pool --> IA
  IA -- 5/Join Pool
                  [Tokens] --> AP
  AP -- 5'/Transfer LP tokens --> OA

Functions

Function: ProvideDoubleSidedLiquidity
Parameters: expected_spot_price: Option<DecimalRange>
Description: Provide double-sided liquidity to the pre-configured Osmosis Pool from the input account, and deposit the LP tokens into the output account. Aborts if the spot price is not within the expected_spot_price range (if specified).

Function: ProvideSingleSidedLiquidity
Parameters: asset: String, limit: Option<Uint128>, expected_spot_price: Option<DecimalRange>
Description: Provide single-sided liquidity for the specified asset to the pre-configured Osmosis Pool from the input account, and deposit the LP tokens into the output account. Aborts if the spot price is not within the expected_spot_price range (if specified).

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account from which the funds are LPed
    pub input_addr: LibraryAccountType,
    // Account to which the LP position is forwarded
    pub output_addr: LibraryAccountType,
    // LP configuration
    pub lp_config: LiquidityProviderConfig,
}

pub struct LiquidityProviderConfig {
    // ID of the Osmosis pool
    pub pool_id: Uint64,
    // Pool asset 1 
    pub pool_asset_1: String,
    // Pool asset 2
    pub pool_asset_2: String,
}
}

Osmosis GAMM liquidity withdrawer library

The Valence Osmosis GAMM Withdrawer library allows a pool to be exited on Osmosis, using the GAMM module (Generalized Automated Market Maker), from an input account, depositing the withdrawn tokens into an output account.

High-level flow

---
title: Osmosis GAMM Liquidity Withdrawal
---
graph LR
  IA((Input
      Account))
  OA((Output
		  Account))
  P[Processor]
  S[Osmosis GAMM
      Liquidity
      Withdrawal]
  AP[Osmosis
     Pool]
  P -- 1/Withdraw Liquidity --> S
  S -- 2/Query balances --> IA
  S -- 3/Compute amounts --> S
  S -- 4/Do Withdraw Liquidity --> IA
  IA -- 5/Withdraw Liquidity
				  [LP Tokens] --> AP
  AP -- 5'/Transfer assets --> OA

Functions

Function: WithdrawLiquidity
Parameters: -
Description: Withdraw liquidity from the configured Osmosis Pool from the input account and deposit the withdrawn tokens into the configured output account.

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
pub struct LibraryConfig {
    // Account holding the LP tokens
    pub input_addr: LibraryAccountType,
    // Account to which the withdrawn assets are forwarded
    pub output_addr: LibraryAccountType,
    // Liquidity withdrawer configuration
    pub withdrawer_config: LiquidityWithdrawerConfig,
}

pub struct LiquidityWithdrawerConfig {
    // ID of the pool
    pub pool_id: Uint64,
}
}

Valence Reverse Splitter library

The Reverse Splitter library allows funds to be routed from one or more input account(s) to a single output account, for one or more token denom(s), according to the configured ratio(s). It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Reverse Splitter library.

High-level flow

---
title: Reverse Splitter Library
---
graph LR
  IA1((Input
      Account1))
  IA2((Input
       Account2))
  OA((Output
		  Account))
  P[Processor]
  S[Reverse Splitter
    Library]
  C[Contract]
  P -- 1/Split --> S
  S -- 2/Query balances --> IA1
  S -- 2'/Query balances --> IA2
  S -. 3/Query split ratio .-> C
  S -- 4/Do Send funds --> IA1
  S -- 4'/Do Send funds --> IA2
  IA1 -- 5/Send funds --> OA
  IA2 -- 5'/Send funds --> OA

Functions

Function: Split
Parameters: -
Description: Split and route funds from the configured input account(s) to the output account, according to the configured token denom(s) and ratio(s).

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
struct LibraryConfig {
    output_addr: LibraryAccountType,   // Account to which the funds are sent.
    splits: Vec<UncheckedSplitConfig>, // Split configuration per denom.
    base_denom: UncheckedDenom         // Base denom, used with ratios.
}

// Split config for specified account
struct UncheckedSplitConfig {
  denom: UncheckedDenom,                // Denom for this split configuration (either native or CW20).
  account: LibraryAccountType,          // Address of the input account for this split config.
  amount: UncheckedSplitAmount,         // Fixed amount of tokens or an amount defined based on a ratio.
  factor: Option<u64>                   // Multiplier relative to other denoms (only used if a ratio is specified).
}

// Split amount configuration: either a fixed amount of tokens or an amount defined based on a ratio
enum UncheckedSplitAmount {
  FixedAmount(Uint128), // Fixed amount of tokens
  FixedRatio(Decimal),  // Fixed ratio e.g. 0.0262 for NTRN/STARS (or could be another arbitrary ratio)
  DynamicRatio {        // Dynamic ratio calculation (delegated to an external contract)
    contract_addr: String, // e.g. a TWAP oracle wrapper contract address
    params: String,        // base64-encoded arbitrary payload to send in addition to the denoms
  },
}

// Standard query & response for contract computing a dynamic ratio
// for the Splitter & Reverse Splitter libraries.
#[cw_serde]
#[derive(QueryResponses)]
pub enum DynamicRatioQueryMsg {
    #[returns(DynamicRatioResponse)]
    DynamicRatio {
        denoms: Vec<String>,
        params: String,
    }
}

#[cw_serde]
// Response returned by the external contract for a dynamic ratio
struct DynamicRatioResponse {
    pub denom_ratios: HashMap<String, Decimal>,
}
}

Valence Splitter library

The Valence Splitter library allows funds to be split from one input account to one or more output account(s), for one or more token denom(s), according to the configured ratio(s). It is typically used as part of a Valence Program. In that context, a Processor contract will be the main contract interacting with the Splitter library.

High-level flow

---
title: Splitter Library
---
graph LR
  IA((Input
      Account))
  OA1((Output
		  Account 1))
	OA2((Output
		  Account 2))
  P[Processor]
  S[Splitter
    Library]
  C[Contract]
  P -- 1/Split --> S
  S -- 2/Query balances --> IA
  S -. 3/Query split ratio .-> C
  S -- 4/Do Send funds --> IA
  IA -- 5/Send funds --> OA1
  IA -- 5'/Send funds --> OA2

Functions

Function: Split
Parameters: -
Description: Split funds from the configured input account to the output account(s), according to the configured token denom(s) and ratio(s).

Configuration

The library is configured on instantiation via the LibraryConfig type.

#![allow(unused)]
fn main() {
struct LibraryConfig {
    input_addr: LibraryAccountType,    // Address of the input account
    splits: Vec<UncheckedSplitConfig>, // Split configuration per denom
}

// Split config for specified account
struct UncheckedSplitConfig {
  denom: UncheckedDenom,          // Denom for this split configuration (either native or CW20)
  account: LibraryAccountType,    // Address of the output account for this split config
  amount: UncheckedSplitAmount,   // Fixed amount of tokens or an amount defined based on a ratio
}

// Split amount configuration, either a fixed amount of tokens or an amount defined based on a ratio
enum UncheckedSplitAmount {
  FixedAmount(Uint128),       // Fixed amount of tokens
  FixedRatio(Decimal),        // Fixed ratio e.g. 0.0262 for NTRN/STARS (or could be another arbitrary ratio)
  DynamicRatio {              // Dynamic ratio calculation (delegated to an external contract)
    contract_addr: String,    // e.g. a TWAP oracle wrapper contract address
    params: String,           // base64-encoded arbitrary payload to send in addition to the denoms
  },
}

// Standard query & response for contract computing a dynamic ratio
// for the Splitter & Reverse Splitter libraries.
#[cw_serde]
#[derive(QueryResponses)]
pub enum DynamicRatioQueryMsg {
    #[returns(DynamicRatioResponse)]
    DynamicRatio {
        denoms: Vec<String>,
        params: String,
    }
}

#[cw_serde]
// Response returned by the external contract for a dynamic ratio
struct DynamicRatioResponse {
    pub denom_ratios: HashMap<String, Decimal>,
}
}
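A contract answering the DynamicRatio query returns one ratio per requested denom. The stand-in below illustrates the shape of such a responder without cosmwasm dependencies; it returns an even split, whereas a real implementation might consult a TWAP oracle using the params payload. The function name is invented for the example.

```rust
use std::collections::HashMap;

// Minimal stand-in for a contract answering the DynamicRatio query above:
// maps each requested denom to a ratio (here, an even split across denoms).
pub fn dynamic_ratio(denoms: &[&str]) -> HashMap<String, f64> {
    let share = 1.0 / denoms.len() as f64;
    denoms.iter().map(|d| (d.to_string(), share)).collect()
}
```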

Middleware

This section contains a description of the Valence Protocol middleware design.

Valence Protocol Middleware components:

Middleware Broker

The middleware broker acts as an app-level integration gateway in Valence Programs. "Integration" is used rather loosely here on purpose: brokers should remain agnostic to the primitives being integrated into the Valence Protocol. These primitives may include, but are not limited to:

  • data types
  • functions
  • encoding schemes
  • any other distributed system building blocks that may be implemented differently

Problem statement

Valence Programs can be configured to span over multiple domains and last for an indefinite duration of time.

Domains integrated into Valence Protocol are sovereign and evolve on their own.

Middleware brokers provide the means to live with these differences by enabling various primitive conversions to be as seamless as possible. Seamless here primarily refers to causing no downtime to bring a given primitive up-to-date, and making the process of doing so as easy as possible for the developers.

To visualize a rather complex instance of this problem, consider the following situation. A Valence Program is initialized to continuously query a particular type from a remote domain, modify some of its values, and send the altered object back to the remote domain for further actions. At some point during runtime, the remote domain performs an upgrade that extends the given type with additional fields. The Valence Program is unaware of this upgrade and continues with its order of operations. However, from the perspective of the Valence Program, the type in question has drifted and is no longer representative of its origin domain.

Among other things, Middleware brokers should enable such programs to gracefully recover into a synchronized state that can continue operating in a correct manner.

Broker Lifecycle

Brokers are singleton components that are instantiated before the program start time.

Valence Programs refer to their brokers of choice by their respective addresses.

This means that the same broker instance for a particular domain could be used across many Valence Programs.

Brokers maintain their set of type registries and index them by semver. New type registries can be added to a broker at runtime. Programs are free to select a particular type registry version for a given request; by default, the most up-to-date type registry is used.

These two properties reduce the work needed to maintain integrations across active Valence Programs: once a broker is updated with the latest version of a given domain, that version immediately becomes available to all Valence Programs using that broker.

API

The broker interface is agnostic to the type registries it indexes. A single query is exposed:

#![allow(unused)]
fn main() {
pub struct QueryMsg {
    pub registry_version: Option<String>,
    pub query: RegistryQueryMsg,
}
}

This query message should only change in situations where it may become limiting.

After receiving a query request, the broker relays the contained RegistryQueryMsg to the correct type registry and returns the result to the caller.
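The version-selection step can be sketched as follows. This is an illustration of the lookup described above, not the broker's actual code; in particular, ordering registries by their raw version string is a simplification of real semver comparison.

```rust
use std::collections::BTreeMap;

// Sketch of the broker's registry selection: registries are indexed by their
// semver string, and a request either pins a version (registry_version) or
// falls back to the latest one. Lexicographic ordering of the version string
// stands in for real semver comparison here.
pub fn select_registry<'a>(
    registries: &'a BTreeMap<String, String>, // version -> registry address
    requested_version: Option<&str>,
) -> Option<&'a String> {
    match requested_version {
        Some(v) => registries.get(v),
        // BTreeMap iterates keys in ascending order, so the last value
        // corresponds to the highest version string
        None => registries.values().last(),
    }
}
```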

Middleware Type Registry

Middleware type registries are static components that define how primitives external to the Valence Protocol are adapted to be used within Valence programs.

While type registries can be used independently, they are typically meant to be registered into and used via brokers to ensure versioning is kept up to date.

Type Registry lifecycle

Type Registries are static contracts that define their primitives during compile time.

Once a registry is deployed, it is expected to remain unchanged. If a type change is needed, a new registry should be compiled, deployed, and registered into the broker to offer the missing or updated functionality.

API

All type registry instances must implement the same interface defined in middleware-utils.

Type registries function in a read-only manner - all of their functionality is exposed with the RegistryQueryMsg. Currently, the following primitive conversions are enabled:

#![allow(unused)]
fn main() {
pub enum RegistryQueryMsg {
    /// serialize a message to binary
    #[returns(NativeTypeWrapper)]
    FromCanonical { obj: ValenceType },
    /// deserialize a message from binary/bytes
    #[returns(Binary)]
    ToCanonical { type_url: String, binary: Binary },

    /// get the kvkey used for registering an interchain query
    #[returns(KVKey)]
    KVKey {
        type_id: String,
        params: BTreeMap<String, Binary>,
    },

    #[returns(NativeTypeWrapper)]
    ReconstructProto {
        type_id: String,
        icq_result: InterchainQueryResult,
    },
}
}

RegistryQueryMsg can be seen as the superset of all primitives that Valence Programs can expect. No particular type being integrated into the system is required to implement all available functionality, although that is possible.

To maintain a unified interface, all type registries must adhere to the same API. This means that if a type enabled in a registry only provides the means to perform native <-> canonical conversion, attempting to call ReconstructProto on that type will return an error stating that protobuf reconstruction is not enabled for it.

Module organization

Primitives defined in type registries should be outlined in a domain-driven manner. Types, encodings, and any other functionality should be grouped by their domain and are expected to be self-contained, not leaking into other primitives.

For instance, an osmosis type registry is expected to contain all registry instances related to the Osmosis domain. Different registry instances should be versioned by semver, following that of the external domain whose primitives are being integrated.

Enabled primitives

Currently, the following type registry primitives are enabled:

  • Neutron Interchain Query types:
    • reconstructing native types from protobuf
    • obtaining the KVKey used to initiate the query for a given type
  • Valence Canonical Types:
    • reconstructing native types from Valence Types
    • mapping native types into Valence Types

Example integration

As an example, consider the integration of the osmosis gamm pool.

Neutron Interchain Query integration

Neutron Interchain Query integration for a given type is achieved by implementing the IcqIntegration trait:

pub trait IcqIntegration {
    fn get_kv_key(params: BTreeMap<String, Binary>) -> Result<KVKey, MiddlewareError>;
    fn decode_and_reconstruct(
        query_id: String,
        icq_result: InterchainQueryResult,
    ) -> Result<Binary, MiddlewareError>;
}

get_kv_key

Implementing get_kv_key provides the means to obtain the KVKey needed to register the interchain query. For the osmosis gamm pool, the implementation may look like this:

impl IcqIntegration for OsmosisXykPool {
    fn get_kv_key(params: BTreeMap<String, Binary>) -> Result<KVKey, MiddlewareError> {
        let pool_prefix_key: u8 = 0x02;

        let id: u64 = try_unpack_domain_specific_value("pool_id", &params)?;

        let mut pool_access_key = vec![pool_prefix_key];
        pool_access_key.extend_from_slice(&id.to_be_bytes());

        Ok(KVKey {
            path: STORAGE_PREFIX.to_string(),
            key: Binary::new(pool_access_key),
        })
    }
}
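
The key layout produced above (a one-byte pool prefix followed by the pool id as big-endian bytes) can be checked in isolation with plain Rust; pool_access_key below mirrors the body of get_kv_key:

```rust
// Build the gamm pool storage access key as in get_kv_key above:
// a one-byte pool prefix (0x02) followed by the pool id encoded as
// big-endian u64 bytes, yielding a 9-byte key.
fn pool_access_key(pool_id: u64) -> Vec<u8> {
    let pool_prefix_key: u8 = 0x02;
    let mut key = vec![pool_prefix_key];
    key.extend_from_slice(&pool_id.to_be_bytes());
    key
}

fn main() {
    // pool id 1 -> prefix byte followed by 8-byte big-endian id
    assert_eq!(pool_access_key(1), vec![0x02, 0, 0, 0, 0, 0, 0, 0, 1]);
    println!("{:02x?}", pool_access_key(1));
}
```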

decode_and_reconstruct

The other part of enabling interchain queries is the implementation of decode_and_reconstruct. This method is called after the ICQ relayer posts the query result back to the interchainqueries module on Neutron. For the osmosis gamm pool, the implementation may look like this:

impl IcqIntegration for OsmosisXykPool {
    fn decode_and_reconstruct(
        _query_id: String,
        icq_result: InterchainQueryResult,
    ) -> Result<Binary, MiddlewareError> {
        let any_msg: Any = Any::decode(icq_result.kv_results[0].value.as_slice())
            .map_err(|e| MiddlewareError::DecodeError(e.to_string()))?;

        let osmo_pool: Pool = any_msg
            .try_into()
            .map_err(|_| StdError::generic_err("failed to parse into pool"))?;

        to_json_binary(&osmo_pool)
            .map_err(StdError::from)
            .map_err(MiddlewareError::Std)
    }
}

Valence Type integration

Valence Type integration for a given type is achieved by implementing the ValenceTypeAdapter trait:

pub trait ValenceTypeAdapter {
    type External;

    fn try_to_canonical(&self) -> Result<ValenceType, MiddlewareError>;
    fn try_from_canonical(canonical: ValenceType) -> Result<Self::External, MiddlewareError>;
}

Ideally, Valence Types should represent the minimal amount of information needed and avoid any domain-specific logic or identifiers. In practice, this is a hard problem: native types that are mapped into Valence Types may need to be sent back to their remote domains. For that reason, we cannot afford to drop any domain-specific fields and instead store them in the Valence Type itself for later reconstruction.

In the case of ValenceXykPool, this storage is kept in its domain_specific_fields field. Fields that are logically common across all possible integrations of this type should be kept in dedicated fields. In the case of constant product pools, these are the assets in the pool and the total shares issued to represent those assets:

#[cw_serde]
pub struct ValenceXykPool {
    /// assets in the pool
    pub assets: Vec<Coin>,

    /// total amount of shares issued
    pub total_shares: String,

    /// any other fields that are unique to the external pool type
    /// being represented by this struct
    pub domain_specific_fields: BTreeMap<String, Binary>,
}
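
To see why preserving domain_specific_fields matters, here is a minimal round-trip sketch in plain Rust (hypothetical types; Vec<u8> stands in for Binary): a native-only field is stashed during the canonical conversion and recovered when mapping back.

```rust
use std::collections::BTreeMap;

// Hypothetical native type: `address` is domain-specific and has no
// place in the canonical shape, but must survive a round trip.
#[derive(Debug, PartialEq)]
struct NativePool {
    address: String,
    assets: Vec<(String, u128)>,
}

// Simplified canonical shape with a domain_specific_fields escape hatch.
#[derive(Debug)]
struct CanonicalPool {
    assets: Vec<(String, u128)>,
    domain_specific_fields: BTreeMap<String, Vec<u8>>,
}

fn to_canonical(native: &NativePool) -> CanonicalPool {
    let mut fields = BTreeMap::new();
    // stash the domain-specific address for later reconstruction
    fields.insert("address".to_string(), native.address.as_bytes().to_vec());
    CanonicalPool {
        assets: native.assets.clone(),
        domain_specific_fields: fields,
    }
}

fn from_canonical(canonical: &CanonicalPool) -> Result<NativePool, String> {
    let raw = canonical
        .domain_specific_fields
        .get("address")
        .ok_or("missing domain-specific field: address")?;
    Ok(NativePool {
        address: String::from_utf8(raw.clone()).map_err(|e| e.to_string())?,
        assets: canonical.assets.clone(),
    })
}

fn main() {
    let native = NativePool {
        address: "osmo1pool".into(),
        assets: vec![("uosmo".into(), 1000)],
    };
    // nothing is lost on the way to the canonical type and back
    let round_trip = from_canonical(&to_canonical(&native)).unwrap();
    assert_eq!(native, round_trip);
}
```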

try_to_canonical

Implementing try_to_canonical provides the means of mapping a native remote type into the canonical Valence Type used in the Valence Protocol. For the osmosis gamm pool, the implementation may look like this:

impl ValenceTypeAdapter for OsmosisXykPool {
    type External = Pool;

    fn try_to_canonical(&self) -> Result<ValenceType, MiddlewareError> {
        // pack all the domain-specific fields
        let mut domain_specific_fields = BTreeMap::from([
            (ADDRESS_KEY.to_string(), to_json_binary(&self.0.address)?),
            (ID_KEY.to_string(), to_json_binary(&self.0.id)?),
            (
                FUTURE_POOL_GOVERNOR_KEY.to_string(),
                to_json_binary(&self.0.future_pool_governor)?,
            ),
            (
                TOTAL_WEIGHT_KEY.to_string(),
                to_json_binary(&self.0.total_weight)?,
            ),
            (
                POOL_PARAMS_KEY.to_string(),
                to_json_binary(&self.0.pool_params)?,
            ),
        ]);

        if let Some(shares) = &self.0.total_shares {
            domain_specific_fields
                .insert(SHARES_DENOM_KEY.to_string(), to_json_binary(&shares.denom)?);
        }

        for asset in &self.0.pool_assets {
            if let Some(token) = &asset.token {
                domain_specific_fields.insert(
                    format!("pool_asset_{}_weight", token.denom),
                    to_json_binary(&asset.weight)?,
                );
            }
        }

        let mut assets = vec![];
        for asset in &self.0.pool_assets {
            if let Some(t) = &asset.token {
                assets.push(coin(u128::from_str(&t.amount)?, t.denom.to_string()));
            }
        }

        let total_shares = self
            .0
            .total_shares
            .clone()
            .map(|shares| shares.amount)
            .unwrap_or_default();

        Ok(ValenceType::XykPool(ValenceXykPool {
            assets,
            total_shares,
            domain_specific_fields,
        }))
    }
}

try_from_canonical

The other part of enabling Valence Type integration is the implementation of try_from_canonical. This method is called when converting a canonical type back into its native version. For the osmosis gamm pool, the implementation may look like this:

impl ValenceTypeAdapter for OsmosisXykPool {
    type External = Pool;

    fn try_from_canonical(canonical: ValenceType) -> Result<Self::External, MiddlewareError> {
        let inner = match canonical {
            ValenceType::XykPool(pool) => pool,
            _ => {
                return Err(MiddlewareError::CanonicalConversionError(
                    "canonical inner type mismatch".to_string(),
                ))
            }
        };
        // unpack domain specific fields from inner type
        let address: String = inner.get_domain_specific_field(ADDRESS_KEY)?;
        let id: u64 = inner.get_domain_specific_field(ID_KEY)?;
        let future_pool_governor: String =
            inner.get_domain_specific_field(FUTURE_POOL_GOVERNOR_KEY)?;
        let pool_params: Option<PoolParams> = inner.get_domain_specific_field(POOL_PARAMS_KEY)?;
        let shares_denom: String = inner.get_domain_specific_field(SHARES_DENOM_KEY)?;
        let total_weight: String = inner.get_domain_specific_field(TOTAL_WEIGHT_KEY)?;

        // unpack the pool assets
        let mut pool_assets = vec![];
        for asset in &inner.assets {
            let pool_asset = PoolAsset {
                token: Some(Coin {
                    denom: asset.denom.to_string(),
                    amount: asset.amount.into(),
                }),
                weight: inner
                    .get_domain_specific_field(&format!("pool_asset_{}_weight", asset.denom))?,
            };
            pool_assets.push(pool_asset);
        }

        Ok(Pool {
            address,
            id,
            pool_params,
            future_pool_governor,
            total_shares: Some(Coin {
                denom: shares_denom,
                amount: inner.total_shares,
            }),
            pool_assets,
            total_weight,
        })
    }
}

Valence Types

Valence Types are a set of canonical type wrappers to be used inside Valence Programs.

The primary operational domain of the Valence Protocol needs to consume, interpret, and otherwise manipulate data from external domains. For that reason, canonical representations of such types are defined to form an abstraction layer that all Valence Programs can reason about.

Canonical Type integrations

Canonical types to be used in Valence Programs are enabled by the Valence Protocol.

For instance, consider Astroport XYK and Osmosis GAMM pool types. These are two distinct data types that represent the same underlying concept - a constant product pool.

These types can be unified in the Valence Protocol context by being mapped to and from the following Valence Type definition:

pub struct ValenceXykPool {
    /// assets in the pool
    pub assets: Vec<Coin>,

    /// total amount of shares issued
    pub total_shares: String,

    /// any other fields that are unique to the external pool type
    /// being represented by this struct
    pub domain_specific_fields: BTreeMap<String, Binary>,
}

Integrating a remote type into the Valence Protocol means that adapters are available to map between the canonical and original type definitions.

These adapters can be implemented by following the design outlined by type registries.
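
As a sketch of what such adapters buy you, the following plain-Rust example (hypothetical pool shapes, not the actual Astroport or Osmosis types) maps two differently shaped native pools into one canonical representation that downstream logic can treat uniformly:

```rust
// Two hypothetical native pool shapes, loosely mimicking Astroport XYK
// and Osmosis GAMM, unified behind one canonical representation.
struct AstroportPool {
    asset_amounts: [u128; 2],
    lp_supply: u128,
}

struct OsmosisPool {
    pool_assets: Vec<u128>,
    total_shares: u128,
}

// Simplified canonical constant product pool.
struct CanonicalXyk {
    assets: Vec<u128>,
    total_shares: String,
}

trait ToCanonicalXyk {
    fn to_canonical(&self) -> CanonicalXyk;
}

impl ToCanonicalXyk for AstroportPool {
    fn to_canonical(&self) -> CanonicalXyk {
        CanonicalXyk {
            assets: self.asset_amounts.to_vec(),
            total_shares: self.lp_supply.to_string(),
        }
    }
}

impl ToCanonicalXyk for OsmosisPool {
    fn to_canonical(&self) -> CanonicalXyk {
        CanonicalXyk {
            assets: self.pool_assets.clone(),
            total_shares: self.total_shares.to_string(),
        }
    }
}

fn main() {
    let a = AstroportPool { asset_amounts: [5, 10], lp_supply: 7 }.to_canonical();
    let o = OsmosisPool { pool_assets: vec![5, 10], total_shares: 7 }.to_canonical();
    // downstream program logic reasons about one shape only
    assert_eq!(a.assets, o.assets);
    assert_eq!(a.total_shares, o.total_shares);
}
```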

Active Valence Types

Active Valence types provide the interface for integrating remote domain representations of the same underlying concepts. Remote types can be integrated into Valence Protocol if and only if there is an enabled Valence Type representing the same underlying primitive.

Currently enabled Valence types are:

  • XYK pool
  • Balance response

Examples

Here are some examples of Valence Programs that you can use to get started.

Token Swap Program

This example demonstrates a simple token swap program whereby two parties wish to exchange specific amounts of (different) tokens they each hold, at a rate they have previously agreed on. The program ensures the swap happens atomically, so neither party can withdraw without completing the trade.

---
title: Valence token swap program
---
graph LR
	InA((Party A Deposit))
	InB((Party B Deposit))
	OutA((Party A Withdraw))
	OutB((Party B Withdraw))
	SSA[Splitter A]
	SSB[Splitter B]
	subgraph Neutron
	InA --> SSA --> OutB
	InB --> SSB --> OutA
	end

The program is composed of the following components:

  • Party A Deposit account: a Valence account which Party A will deposit their tokens into, to be exchanged with Party B's tokens.
  • Splitter A: an instance of the Splitter library that will transfer Party A's tokens from its input account (i.e. the Party A Deposit account) to its output account (i.e. the Party B Withdraw account) upon execution of its split function.
  • Party B Withdraw account: the account from which Party B can withdraw Party A's tokens after the swap has successfully completed. Note: this can be a Valence account, but it could also be a regular chain account, or a smart contract.
  • Party B Deposit account: a Valence account which Party B will deposit their funds into, to be exchanged with Party A's funds.
  • Splitter B: an instance of the Splitter library that will transfer Party B's tokens from its input account (i.e. the Party B Deposit account) to its output account (i.e. the Party A Withdraw account) upon execution of its split function.
  • Party A Withdraw account: the account from which Party A can withdraw Party B's tokens after the swap has successfully completed. Note: this can be a Valence account, but it could also be a regular chain account, or a smart contract.

The program fulfils the requirement for an atomic exchange of tokens between the two parties by implementing an atomic subroutine composed of two function calls:

  1. Splitter A's split function
  2. Splitter B's split function

The Authorizations component ensures that either both succeed or neither is executed, thereby ensuring that funds remain safe at all times (either remaining in the respective deposit accounts, or transferred to the respective withdraw accounts).
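
The all-or-nothing behavior of the atomic subroutine can be sketched in plain Rust (a simulation, not the actual Processor logic): both splits are staged against a copy of the balances and committed only if every function succeeds.

```rust
use std::collections::BTreeMap;

// Balances keyed by account name; a "split" moves tokens from an
// input account to an output account.
type Balances = BTreeMap<String, u128>;

fn split(balances: &mut Balances, from: &str, to: &str, amount: u128) -> Result<(), String> {
    let src = balances.get(from).copied().unwrap_or(0);
    if src < amount {
        return Err(format!("insufficient funds in {from}"));
    }
    *balances.entry(from.to_string()).or_insert(0) -= amount;
    *balances.entry(to.to_string()).or_insert(0) += amount;
    Ok(())
}

// Execute both splits atomically: stage against a copy and commit
// only if every function in the subroutine succeeds.
fn atomic_swap(balances: &mut Balances, amt_a: u128, amt_b: u128) -> Result<(), String> {
    let mut staged = balances.clone();
    split(&mut staged, "deposit_a", "withdraw_b", amt_a)?;
    split(&mut staged, "deposit_b", "withdraw_a", amt_b)?;
    *balances = staged; // both succeeded: commit
    Ok(())
}

fn main() {
    let mut balances: Balances =
        [("deposit_a".into(), 100u128), ("deposit_b".into(), 50u128)].into();
    // Party B underfunded their deposit: nothing moves.
    assert!(atomic_swap(&mut balances, 100, 60).is_err());
    assert_eq!(balances["deposit_a"], 100);
    // Both sides funded: both splits execute.
    atomic_swap(&mut balances, 100, 50).unwrap();
    assert_eq!(balances["withdraw_b"], 100);
    assert_eq!(balances["withdraw_a"], 50);
}
```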

Crosschain Vaults

Note: This example is still in the design phase and includes new or experimental features of Valence Programs that may not be supported in the current production release.

Overview

You can use Valence Programs to create crosschain vaults. Users interact with a vault on one chain while the tokens are held on another chain where yield is generated.

Note: In our initial implementation we use Neutron for co-processing and Hyperlane for general message passing between the co-processor and the target domain. Deployment of Valence programs as zk RISC-V co-processors with permissionless message passing will be available in the coming months.

In this example, we have made the following assumptions:

  • Users can deposit tokens into a standard ERC-4626 vault on Ethereum.
  • ERC-20 shares are issued to users on Ethereum.
  • If a user wishes to redeem their tokens, they can issue a withdrawal request which will burn the user's shares when tokens are redeemed.
  • The redemption rate that tells us how many tokens can be redeemed per share is given by: \( R = \frac{TotalAssets}{TotalIssuedShares} = \frac{TotalInVault + TotalInTransit + TotalInPosition}{TotalIssuedShares}\)
  • A permissioned actor called the "Strategist" is authorized to transport funds from Ethereum to Neutron where they are locked in some DeFi protocol. And vice-versa, the Strategist can withdraw from the position so the funds are redeemable on Ethereum. The redemption rate must be adjusted by the Strategist accordingly.
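
Under these assumptions, the redemption rate can be computed as in the sketch below (illustrative fixed-point scale; not the production implementation):

```rust
// Redemption rate as defined above: assets in the vault, in transit,
// and in the position, divided by issued shares. Fixed-point with a
// 1e18 scale to avoid floats (scale chosen for illustration).
const SCALE: u128 = 1_000_000_000_000_000_000;

fn redemption_rate(
    in_vault: u128,
    in_transit: u128,
    in_position: u128,
    issued_shares: u128,
) -> Option<u128> {
    if issued_shares == 0 {
        return None; // undefined before the first deposit
    }
    Some((in_vault + in_transit + in_position) * SCALE / issued_shares)
}

fn main() {
    // 1500 total assets backing 1000 shares -> rate 1.5
    let r = redemption_rate(500, 400, 600, 1000).unwrap();
    assert_eq!(r, 3 * SCALE / 2);
}
```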
---
title: Crosschain Vaults Overview
---
graph LR
	User
	EV(Ethereum Vault)
	NP(Neutron Position)

	User -- Tokens --> EV
	EV -- Shares --> User
	EV -- Strategist Transport --> NP
	NP -- Strategist Transport --> EV

While we have chosen Ethereum and Neutron as examples here, one could similarly construct such vaults between any two chains as long as they are supported by Valence Programs.

Implementing Crosschain Vaults as a Valence Program

Recall that Valence Programs are composed of Libraries and Accounts. Libraries are collections of Functions that perform token operations on the Accounts. Since there are two chains here, Libraries and Accounts will exist on both chains.

Since gas is cheaper on Neutron than on Ethereum, computationally expensive operations, such as constraining the Strategist's actions, will be performed on Neutron. Authorized messages are then executed by each chain's Processor. Hyperlane is used to pass messages from the Authorization contract on Neutron to the Processor on Ethereum.

---
title: Program Control
---
graph BT
	Strategist
	subgraph Ethereum
		EP(Processor)
		EHM(Hyperlane Mailbox)
		EL(Ethereum Valence Libraries)
		EVA(Valence Accounts)
	end
	subgraph Neutron
		A(Authorizations)
		NP(Processor)
		EE(EVM Encoder)
		NHM(Hyperlane Mailbox)
		NL(Neutron Valence Libraries)
		NVA(Valence Accounts)
	end

	Strategist --> A
	A --> EE --> NHM --> Relayer --> EHM --> EP --> EL --> EVA
	A --> NP --> NL--> NVA

Libraries and Accounts needed

On Ethereum, we'll need Accounts for:

  • Deposit: To hold user-deposited tokens. Tokens from this pool can then be transported to Neutron.
  • Withdraw: To hold tokens received from Neutron. Tokens from this pool can then be redeemed for shares.

On Neutron, we'll need Accounts for:

  • Deposit: To hold tokens bridged from Ethereum. Tokens from this pool can be used to enter into the position on Neutron.
  • Position: Will hold the vouchers or shares associated with the position on Neutron.
  • Withdraw: To hold the tokens that are withdrawn from the position. Tokens from this pool can be bridged back to Ethereum.

We'll need the following Libraries on Ethereum:

  • Bridge Transfer: To transfer funds from the Ethereum Deposit Account to the Neutron Deposit Account.
  • Forwarder: To transfer funds between the Deposit and Withdraw Accounts on Ethereum. Two instances of the Library will be required.

We'll need the following Libraries on Neutron:

  • Position Depositor: To take funds in the Deposit and create a position with them. The position is held by the Position account.
  • Position Withdrawer: To redeem a position for underlying funds that are then transferred to the Withdraw Account on Neutron.
  • Bridge Transfer: To transfer funds from the Neutron Withdraw Account to the Ethereum Withdraw Account.

Note that the Accounts mentioned here are standard Valence Accounts. The Bridge Transfer library will depend on the token being transferred, but will offer similar functionality to the IBC Transfer library. The Position Depositor and Withdrawer will depend on the type of position, but can be similar to the Liquidity Provider and Liquidity Withdrawer.

Vault Contract

The Vault contract is a special contract on Ethereum that has an ERC-4626 interface.

User methods to deposit funds

  • Deposit: Deposit funds into the registered Deposit Account. Receive shares back based on the redemption rate.
    Deposit {
    	amount: Uint256,
    	receiver: String
    }
    
  • Mint: Mint shares from the vault. Expects the user to provide sufficient tokens to cover the cost of the shares based on the current redemption rate.
    Mint {
    	shares: Uint256,
    	receiver: String
    }
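
The usual ERC-4626 conversion math behind Deposit and Mint can be sketched as follows (illustrative rounding choices; the actual vault may differ):

```rust
// Standard ERC-4626-style conversions (a sketch, not the actual vault
// code): Deposit converts an asset amount to shares at the current
// rate; Mint computes the assets needed to cover a share count.
fn shares_for_deposit(amount: u128, total_assets: u128, total_shares: u128) -> u128 {
    if total_shares == 0 {
        return amount; // bootstrap case: 1:1
    }
    amount * total_shares / total_assets
}

fn assets_for_mint(shares: u128, total_assets: u128, total_shares: u128) -> u128 {
    if total_shares == 0 {
        return shares;
    }
    // round up so the vault is never shortchanged
    (shares * total_assets + total_shares - 1) / total_shares
}

fn main() {
    // vault holds 1500 assets against 1000 shares (rate 1.5)
    assert_eq!(shares_for_deposit(150, 1500, 1000), 100);
    assert_eq!(assets_for_mint(100, 1500, 1000), 150);
}
```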
    
---
title: User Deposit and Share Mint Flow
---
graph LR
	User
	subgraph Ethereum
		direction LR
		EV(Vault)
		ED((Deposit))
	end
	
	User -- 1/ Deposit Tokens --> EV
	EV -- 2/ Send Shares --> User
	EV -- 3/ Send Tokens --> ED

User methods to withdraw funds

  • Redeem: Send shares to redeem assets. This creates a WithdrawRecord in a queue. This record is processed at the next Epoch.
    Redeem {
    	shares: Uint256,
    	receiver: String,
    	max_loss_bps: u64
    }
    
  • Withdraw: Withdraw amount of assets. It expects the user to have sufficient shares. This creates a WithdrawRecord in a queue. This record is processed at the next Epoch.
    Withdraw {
    	amount: Uint256,
    	receiver: String,
    	max_loss_bps: u64
    }
    

Withdrawals are subject to a lockup period after the user has initiated a redemption. During this time the redemption rate may change. Users can use the max_loss_bps parameter to specify an acceptable loss in case the redemption rate decreases.
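
A sketch of how the max_loss_bps check could work (hypothetical helper; it compares the rate at request time to the rate at processing time, in basis points of the original rate):

```rust
// Returns true if the redemption rate loss between request time and
// processing time stays within the user's max_loss_bps tolerance.
// Rates are plain integers here for illustration; any consistent
// fixed-point representation works the same way.
fn loss_within_bounds(rate_at_request: u128, rate_now: u128, max_loss_bps: u64) -> bool {
    if rate_now >= rate_at_request {
        return true; // rate held or improved: no loss
    }
    let loss_bps = (rate_at_request - rate_now) * 10_000 / rate_at_request;
    loss_bps <= max_loss_bps as u128
}

fn main() {
    // rate fell from 1.50 to 1.47 -> a 200 bps loss
    assert!(loss_within_bounds(150, 147, 200));
    assert!(!loss_within_bounds(150, 147, 199));
}
```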

After the Epoch has completed, a user may complete the withdrawal by executing the following message:

  • CompleteWithdraw: Pop the WithdrawRecord. Pull funds from the Withdraw Account and send to user. Burn the user's deposited shares.
---
title: User Withdraw Flow
---
graph RL
	subgraph Ethereum
		direction RL
		EV(Vault)
		EW((Withdraw))
	end
	EW -- 2/ Send Tokens --> EV -- 3/ Send Tokens --> User
	User -- 1/ Deposit Shares --> EV

Strategist methods to manage the vault

The vault validates that the Processor is making calls to it. On Neutron, the Authorization contract limits the calls to be made only by a trusted Strategist. The Authorization contract can further constrain when or how Strategist actions can be taken.

  • Update: The strategist can update the current redemption rate.
    Update {
      rate: Uint256
    }
    
  • Pause and Unpause: The strategist can pause and unpause vault operations.
    Pause {}
    

Program subroutines

The program authorizes the Strategist to update the redemption rate and transport funds between various Accounts.

Allowing the Strategist to transport funds

---
title: From Ethereum Deposit Account to Neutron Position Account
---
graph LR
	subgraph Ethereum
		ED((Deposit))
		ET(Bridge Transfer)
	end
	subgraph Neutron
		NPH((Position Holder))
		NPD(Position Depositor)
		ND((Deposit))
	end

	ED --> ET --> ND --> NPD --> NPH
---
title: From Neutron Position Account to Ethereum Withdraw Account
---
graph RL
	subgraph Ethereum
		EW((Withdraw))
	end
	subgraph Neutron
		NPH((Position Holder))
		NW((Withdraw))
		NT(Bridge Transfer)
		NPW(Position Withdrawer)
	end

	NPH --> NPW --> NW --> NT --> EW

---
title: Between Ethereum Deposit and Ethereum Withdraw Accounts
---
graph
	subgraph Ethereum
		ED((Deposit))
		EW((Withdraw))
		FDW(Forwarder)
	end
	ED --> FDW --> EW

Design notes

This is a simplified design to demonstrate how a cross-chain vault can be implemented with Valence Programs. Production deployments will need to consider additional factors not covered here including:

  • Fees for gas, bridging, and entering/exiting the position on Neutron. It is recommended that the vault impose a withdrawal fee and a platform fee on users.
  • How to constrain Strategist behavior to ensure they set redemption rates correctly.