This constitution is a guide for the principles and values driving X1's vision for a decentralized, self-sovereign digital future.
All individuals have the right to communicate, transact, and interact freely in a decentralized, permissionless manner without interference and censorship from centralized authorities.
Every participant retains full control over their digital assets, data, and private information. No entity can seize or alter an individual’s property without their explicit consent.
Participants have the option to protect their personal information, activities, and financial interactions. This is achieved through encryption and other cryptographic methods, allowing individuals to maintain control over how much of their identity is revealed.
X1 advocates for open-source, verifiable smart contracts and cryptographic protocols where trust is embedded in technology rather than fallible human intermediaries.
X1 is committed to the continuous evolution of cryptographic methods, decentralized technology, and blockchain innovations, adapting to new challenges and fostering a future where freedom, privacy, and decentralization thrive.
Freedom to transact.
The X1 blockchain is a high-performance, high-throughput, monolithic L1 with a mission to provide decentralised, censorship-resistant, multi-purpose infrastructure that empowers the freedom to transact with minimal technical and economic limitations.
X1 improves on Solana's economic model to scale capacity even further.
On X1, validators are not required to pay for votes. The zero-cost vote mechanism significantly lowers the barrier for new validators to join and participate in consensus, limiting running costs to approximately 5 USD per day.
For roughly $5 a day, a validator can participate in the chain's consensus and earn rewards, making X1 the lowest-cost network for validator participation.
At the same time, validators have several ways to generate rewards: voting rewards, commissions from delegators, block rewards, and an incentivized bootstrap bonus program. You can set up your own validator in just a few minutes.
X1 utilises congestion-reflective dynamic base fees. Base fees are not fixed; they are calculated from the current demand for block space, measured in the computational resources the network is consuming. This model, akin to EIP-1559 on Ethereum, prevents the underpricing of transactions and spam, ensuring that transactions are fairly priced.
The X1 blockchain is fully compatible with the Solana Virtual Machine (SVM), allowing developers to seamlessly deploy decentralised applications.
X1 launched mainnet on October 6th, 2025, and already has over 1,000 validators participating. To watch their live performance, visit the live validator dashboard.
X1 Blockchain will have zero-cost votes and congestion-reflective dynamic base fees implemented from mainnet launch.
Other scaling and performance improvements are listed and described in the technical roadmap and include:
How to stop, start, and verify your X1 validator.
Perform encrypted computations
Homomorphic encryption is a type of encryption that allows computations to be performed on encrypted data without needing to decrypt it first. The result of these computations, when decrypted, matches the outcome of performing the same operations on the original unencrypted data. This property enables processing and analyzing data while preserving its confidentiality.
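Formally, the defining property can be stated as follows (a generic statement; the concrete ciphertext operation depends on the scheme, e.g. Paillier is additively homomorphic):
$$\mathrm{Dec}\big(\mathrm{Enc}(a) \circ \mathrm{Enc}(b)\big) = a \oplus b$$
where $\circ$ is an operation on ciphertexts and $\oplus$ is the corresponding operation (such as addition) on the underlying plaintexts.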
This capability is significant for several reasons:
1. Privacy Preservation: With homomorphic encryption, sensitive data can remain encrypted even while being processed. This is crucial in scenarios like cloud computing, where users might need to outsource computation to a third party but want to ensure that their data remains confidential.
See list of registered validators:
Set the CLI to your identity.json keypair:
Publish your validator information:
Note "-k" at the end which points to your identify.json including its path. This links your validator identity to the information you are providing. Your icon needs to be hosted.
Example:
Significantly lower the barrier to validate X1 blockchain
Vote costs on Solana present significant challenges to decentralization. At today's price of 250 USD/SOL, validators must pay approximately $8,600/month to participate in consensus, with 94% of this cost coming from voting fees. These fees create a high barrier to entry, discouraging smaller validators while disproportionately benefiting larger ones. Validators bear these costs to support the network, while top leaders profit from vote fees and block rewards, exacerbating centralization.
Arguments for vote fees include covering compute resources (2,100 CUs per vote) and deterring spam. However, these justifications are flawed. Votes are net-positive for the network, enhancing resilience and security. Taxing them discourages participation, ultimately weakening the system. As for spam, weighted votes require stake to count, making spam voting ineffective. Malicious actors could already spam at low cost, and true consensus attacks would still require 33% of the total stake, an unattainable threshold for most.
On the X1 testnet, a substantial cluster of nodes has demonstrated that removing vote costs is not only possible but also beneficial. Without these fees, validator costs drop significantly, enabling higher scalability and profitability. This approach lowers barriers to entry, encouraging more validators to join and strengthening decentralization. Removing vote costs aligns with the principle that anything above zero is a tax on network participants, in this case, validators.
Concerns about full blocks pushing out votes, as previously observed on Solana during periods of congestion, are also addressed. X1 implements congestion-reflective dynamic base fees, ensuring blocks are never full and eliminating the risk of votes being displaced. Even in rare cases where votes are delayed, the network remains secure, with consensus only postponed by one block.
Import keypair to backpack wallet
1. Download wallet extension
Copy private key
In a terminal, go to the folder where your keypair is stored
Add new Solana wallet > Import private key
In the wallet, enable developer mode and switch to X1 testnet: https://rpc.testnet.x1.xyz
Active validators on X1 Blockchain
The hardware recommendations below are provided as a guide. Operators are encouraged to do their own performance testing.
CPU
12 cores / 24 threads, or more
3GHz base clock speed, or faster
RAM
192GB or more
DISK
4TB NVME
The server needs to be dedicated bare metal, not a VPS.

tachyon-validator exit -f
nohup ~/bin/validator.sh &
df -h
Get stake account:
Deactivate stake:
See un-staking progress:
Leave some XNT for the unstaking fee.
Include the vote account of the validator you want to stake to.
ps aux | grep tachyon-validator | grep -v grep
tail -f ~/validator.log
solana catchup ~/.config/solana/identity.json --our-localhost
solana validators | grep $(solana address -k ~/.config/solana/identity.json)
solana transfer <STAKE_PUBKEY> 1000
solana delegate-stake stake.json vote.json
solana stake-account stake.json
solana account stake.json
solana deactivate-stake <STAKE_PUBKEY>
solana stake-account stake.json
solana withdraw-stake <STAKE_ACCOUNT_ADDRESS> <RECIPIENT_ADDRESS> <AMOUNT>
solana-keygen new --no-passphrase -o ~/.config/solana/stake_<VOTE_PUBKEY>.json
solana create-stake-account stake_<VOTE_PUBKEY>.json 1000
solana delegate-stake stake_<VOTE_PUBKEY>.json <VOTE_PUBKEY>
solana stakes <VOTE_PUBKEY>
2. Security: Since the data is never decrypted during processing, the risk of exposure due to a breach or insider threat is significantly reduced. This enhances the overall security of the system handling the data.
3. Regulatory Compliance: Many industries, such as healthcare and finance, are subject to strict regulations regarding data privacy. Homomorphic encryption can help organizations comply with these regulations by enabling them to process data in an encrypted form.
4. Data Utility: It allows for meaningful analysis and computation on encrypted data without compromising its security. This is particularly useful in collaborative environments where different parties need to work with shared data without revealing the actual contents.
With linear bandwidth scaling and decryption run times in the tens-of-milliseconds, encryption is more efficient and lightweight.
Programmable encryption and conditional decryption can be useful for a wide range of use cases including encrypted on-chain intents (limit orders, stop-loss orders, programmable trading), bad-MEV prevention, private governance, censorship and front-running resistant shared sequencing, rollups, on-chain gaming, legal contracts, randomness generation oracles, and any scenario where asymmetric information limits the use of a decentralized application.
From any application frontend, a user can encrypt a transaction by declaring its condition for decryption. The encrypted transaction is then submitted to the network. Multiple transactions can be encrypted under the same conditions and decrypted in batches.
Once the condition for decryption is met, validators are notified and work together to generate a threshold decryption key. The private key is then sent to the destination chain to decrypt the encrypted transaction.
The private key is used to decrypt all encrypted transactions for each decryption condition, which are then executed on the network upon which the app resides.
Get X1 supply (l for localnet):
Get Solana Supply (m for mainnet):
Get stake history (shows stake history for the last 10 epochs):
Epoch info:
Block production % per epoch
solana validator-info get
solana config set -k identity.json
solana validator-info publish "Validator name" -w "website" -i "icon URL" -k path/<identity keypair>
By reducing validator operational costs (roughly a 40x increase in cost efficiency), the aggregate cost of running the whole network is also lower.
cat id.json
Open your Backpack wallet extension and go to settings. Choose "Solana".
RPC Connection: Custom
Custom
Add RPC endpoint
https://rpc.testnet.x1.xyz
Base fees accounting for actual demand
Transaction pricing on the X1 Blockchain includes base fees and priority fees.
Unlike Ethereum's single-threaded execution, X1, like Solana, leverages the server's expanded capacity through parallelism, allowing transactions to be executed across multiple threads. Priority fees are a voluntary tip to validators, set by users, that helps prioritize their transactions within the local fee market of each thread.
Solana's transaction fee market is primarily driven by local fee markets that prioritize transactions within individual blocks. The base fee for transaction signatures on Solana is statically set at a fixed rate of 5,000 lamports. This fixed fee structure is inadequate for applying sufficient economic back pressure during periods of high network demand. As a result, the network is more vulnerable to spam transactions, which can reduce overall network efficiency and degrade the quality of service. Solana does not employ a global block Compute Unit (CU) accounting mechanism. Without this, transaction fees do not dynamically adjust based on the overall demand on the network's computational resources. During peak periods, the lack of a global adjustment mechanism can lead to network congestion and performance bottlenecks, as low-priority transactions continue to compete for compute resources without any economic disincentive.
The X1 Blockchain dynamically adjusts base fees based on global compute-unit (CU) congestion across its threads. The transaction fee structure accounts for the computational resources consumed by transactions, which are capped at 48M CU per block. Transactions that require more CUs incur proportionally higher fees. As thread capacity becomes more utilized and blocks fill up, a multiplier is applied to transaction pricing, making transactions progressively more expensive.
This dynamic fee scaling mechanism on the X1 Blockchain is designed to create economic back pressure that aligns transaction prioritization with available network capacity. As the network load increases, the corresponding rise in transaction fees discourages non-essential transactions (including bots and spam), reducing the likelihood of network congestion. This strategy optimizes resource allocation and ensures sustained high performance under varying network conditions.
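As a rough sketch of the idea (the exact multiplier schedule is not specified here and is assumed for illustration), the effective base fee of a transaction can be thought of as:
$$\text{fee}(tx) \approx \mathrm{CU}(tx) \times \text{base rate} \times m(u), \qquad u = \frac{\text{CUs scheduled in the block}}{48\,\mathrm{M}\ \mathrm{CU}}$$
where $m(u)$ is a non-decreasing multiplier that stays near 1 when blocks are lightly used and grows as utilization $u$ approaches 1, which is what creates the economic back pressure described above.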
Ethereum utilizes a single-threaded execution model with global base fees, which leads to high transaction costs during periods of network congestion.
Solana, on the other hand, features multithreaded execution that supports local fee markets, generally resulting in lower transaction costs. However, its base fee pricing mechanism does not account for a transaction's computational usage, allowing spam transactions to occupy computational resources.
The X1 Blockchain combines the strengths of both systems. It supports parallel execution utilizing multiple threads, which allows for local fee markets similar to Solana. Additionally, X1 adopts a global base fee enforcement strategy with congestion-reflective dynamic base fees. This approach discourages spam by adjusting transaction fees based on the actual computational load they impose on the network, ensuring that transaction pricing is fair and proportionate to resource usage.
The implementation of dynamic base fees on the X1 Blockchain offers several beneficial effects:
The dynamic base fee system ensures that the cost of a single transaction remains low and only increases when the usage of the chain rises. Additionally, as the chain experiences more usage, validators earn progressively more, and the chain itself becomes more deflationary due to increased native coin burns.
This deflationary mechanism means that, over time, the chain can become more valuable without requiring ever-higher transaction counts; validators benefit from higher earnings without needing a corresponding increase in transaction volume.
By aligning resource allocation with actual demand, the pricing strategy for X1's transaction fees promotes efficient, cost-effective transactions while minimizing spam. This approach ensures that the blockchain maintains high performance and reliability even as it scales.
Running Pinger allows the validator to share leader data transmission times with the network.
When an RPC receives a transaction message to be included in a block, it must forward the message to the leader. Fast data transmission to the leader is crucial for network efficiency and consensus speed.
Monitoring your validator's ping times ensures it meets leader schedule performance requirements.
Ping times measure how long it takes for your validator to communicate with the current leader. Since the leader changes every four slots (1.6s), ping time variance (typically 500–3000ms) depends on the distance to the constantly switching leader.
Ensure the keypair has funds, as a transaction is sent with each ping.
Enable UFW if it is not already enabled:
Allow traffic on port 3334:
Check the service status:
View logs to ensure correct operation:
Once running, your validator will continuously measure and share leader ping times.
You can also check your ping times on x1val.online:
The various ways to make rewards as a validator on X1
Voting rewards from inflation
Commission from delegators
Block rewards from block production
Bootstrap bonus
In the X1 consensus mechanism, validators cast votes on blocks proposed by the leader. Throughout each epoch, validators accumulate credits for their votes, which can be exchanged for a portion of the epoch's inflation rewards. The inflation rewards are allocated based on the number of credits each validator earns during the epoch; a validator's percentage of the total credits determines their share of the inflation rewards. This share is then adjusted according to the validator's stake relative to the total staked amount.
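One way to read this allocation, consistent with Solana-style credit-and-stake weighting (a hedged sketch, not the exact on-chain formula), is:
$$\text{reward}_i \approx I_{\text{epoch}} \cdot \frac{\text{credits}_i \cdot \text{stake}_i}{\sum_j \text{credits}_j \cdot \text{stake}_j}$$
where $I_{\text{epoch}}$ is the total inflation issued for the epoch.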
X1 also features a predefined inflation schedule. It begins at 8% and decreases annually by 15%, aiming for a long-term inflation rate of 1.5%.
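Written out, the stated schedule corresponds to:
$$r_n = \max\big(8\% \times (1 - 0.15)^{\,n},\ 1.5\%\big)$$
where $n$ is the number of years since launch, so the rate falls to roughly 6.8% after one year, about 5.8% after two, and asymptotically approaches the 1.5% floor.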
X1 operates on a delegated proof-of-stake (dPoS) blockchain system. Holders of XNT can delegate their coins to a validator, increasing that validator's staking weight. This not only enhances the validator's potential rewards from inflation through increased voting power but also boosts their chances of being selected as the leader to earn block rewards. Validators set their own commission rates, typically around 10%, which they deduct from the inflationary voting rewards; the remaining rewards are distributed to the stakers.
Transaction fees on the network are awarded to the leader who successfully produces a block. High-performing nodes have a greater chance of being selected as the leader. RPC nodes forward transactions to the leader for execution. If the leader produces a block that is confirmed by over two-thirds (67%) of the validator cluster, weighted by stake and performance, they receive the block rewards, which consist of the transaction fees paid by the network's users for that block.
To support the growth of the network's infrastructure, validators receive additional incentives through a bootstrap program. Eligibility for the bootstrap bonus requires meeting specific criteria. The distribution of the bootstrap bonus for each epoch is determined by a validator's score, which is based on a credit system.
The only direct cost for validators is the hardware expense, which varies depending on the server supplier. To learn more, please read about the hardware requirements for the X1 blockchain.
A major differentiator from Solana is that validators on the X1 Blockchain don't pay for votes. This significantly lowers the barriers to starting and maintaining a validator. Read more about zero-cost votes.
Optimising consensus with validator subcommittees
Once a transaction is executed by the leader, it is immediately recorded to the validator’s copy of the ledger and propagated to the rest of the network. After a block has achieved the necessary votes from consensus, the transaction is considered "confirmed." Finally, a block is considered "finalized" when 31+ confirmed blocks have been built upon it. These stages are returned through the RPC back to the front-end, allowing the user to see the status of their transaction.
Many blockchain networks construct entire blocks before broadcasting them, known as discrete block building. Solana, in contrast, employs continuous block building which involves assembling and streaming blocks dynamically as they are created during an allocated time slot, significantly reducing latency.
For a block to gain acceptance, all transactions within it must be valid and reproducible by other nodes. The leader divides the block into smaller pieces called "shreds." Each shred contains a portion of the block data and is linked to others using cryptographic hashes.
The leader broadcasts the shreds to the validator nodes in the cluster. This is done via Turbine, a tree-based propagation protocol, to ensure wide and efficient dissemination of data. Validators collect shreds and reconstruct the full block from the received shreds.
Validators verify the integrity and validity of the block by checking the PoH hashes and ensuring the transactions are valid and have not been double-spent.
Once a validator verifies a block, it sends a vote transaction to the rest of the network, indicating that the block is valid. When enough votes are collected, the network reaches consensus, and the block is added to the blockchain.
In the Solana blockchain, for a block to be confirmed, a supermajority of votes from the validators in the cluster is required. Specifically, this supermajority is defined as two-thirds (or 66.6%) of the total voting power in the network.
While the network is small this model works very well, but it will lead to scaling challenges over time, which are best understood through the lens of algorithmic complexity.
Addressing scalability while maintaining security and decentralization presents a significant hurdle for blockchain technologies. Traditional blockchains, such as Bitcoin and Ethereum, encounter challenges in this area, frequently experiencing bottlenecks with the expansion of network size and activity. These networks utilize a Nakamoto-style consensus mechanism, necessitating that end-users compete through transaction fees for block inclusion.
Due to the static nature of block size and the fixed frequency of block creation, in contrast to the continuously increasing demand for network usage, this mechanism results in heightened transaction fees and prolonged confirmation times during peak demand periods.
Unlike Bitcoin and Ethereum, blockchains like Fantom and Solana adopt an alternative approach by implementing asynchronous block voting mechanisms. This method requires a minimum participation of two-thirds of all network nodes in message exchanges that facilitate voting. The outcome of this process is a consensus complexity that grows quadratically, denoted mathematically as O(n^2). Although this strategy can be highly efficient and swift for the user's transactions, it inherently restricts the network's scalability with increasing node numbers.
Avalanche employs a novel strategy to mitigate complexity through the introduction of neighborhoods. By adopting a gossip-like structural mechanism, Avalanche effectively reduces its consensus complexity to a logarithmic scale, or O(log(n)). This advancement marks a significant leap towards enhancing scalability without a proportional increase in consensus complexity, while maintaining fast transaction execution times.
The X1 Blockchain improves its consensus algorithm by incorporating subcommittee voting, a concept borrowed from the HotStuff2 consensus model. This enhancement addresses the inefficiencies found in traditional blockchain networks, such as Solana, where the voting process is O(n^2) due to the flat network topology requiring all nodes to participate in voting. This excessive communication creates a bottleneck, slowing down the consensus process.
In contrast, the X1 Blockchain employs a more streamlined approach by selecting a smaller subset of nodes, or subcommittees, to handle voting and validation. This reduces the overall communication overhead and computational load, enabling faster consensus with fewer resources. By supporting an indefinite number of nodes (n) while only requiring a limited group (x) to vote, the X1 Blockchain maintains a constant time complexity, O(1), for consensus operations.
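To make the contrast concrete (illustrative numbers, not measured figures), the progression in per-round message complexity is:
$$\underbrace{O(n^2)}_{\text{flat all-to-all voting}} \;\rightarrow\; \underbrace{O(\log n)}_{\text{Avalanche neighborhoods}} \;\rightarrow\; \underbrace{O(1)\ \text{in } n}_{\text{subcommittee of fixed size } x}$$
For example, at $n = 1{,}000$ nodes, all-to-all voting implies on the order of $10^6$ message pairs per round, whereas a hypothetical subcommittee of $x = 100$ keeps it on the order of $10^4$, independent of how large $n$ grows.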
This strategic use of subcommittees enhances the scalability and efficiency of the X1 Blockchain, allowing it to handle larger networks and higher transaction volumes without the drawbacks of traditional consensus methods. The adoption of subcommittee voting ensures that the network remains both scalable and secure, providing a robust foundation for future growth.
First create a read-only node, then continue with the steps below. To turn the read-only node into a validator node, you need to fund your identity account and create vote and stake accounts.
Check balance:
Create a vote account using the vote.json keypair, with identity.json as the identity, plus the withdrawer address and commission:
--commission: if you receive a delegation, you keep 10% of the rewards it generates.
Check vote account:
Check stake account:
Do delegate stake operation:
Check condition of a stake:
See validators. Your identity pubkey should show up there:
Monitor the ledger (continuous monitoring):
Check block production and skipped slots:
Check leader schedule:
or:
Check block production:
Check epoch information:
Check CPU and memory:
Press Shift+H to toggle the thread view. CPU usage should stay below 100% with some margin.
Check validator process:
Kill validator process:
Validators should see the guide for validator migration instructions.
This guide details the steps taken to migrate the X1 Testnet Chain to v2.0 to document the process for future reference.
Do not perform these steps yourself.
On the bootstrap node, the following steps were taken to migrate the X1 Testnet Chain to v2.0:
Install Tachyon v2.0
Stop the Validator
Join the X1 validator community
Other X1 related community groups:
solana validator-info publish "Jack's Ryzen Test Node" -w "https://x1.xyz" -i "https://x1.xyz/ryzen.jpg" -k .config/solana/identity.jsonsolana validator-info get <Info address>solana -ul supplysolana -um supplysolana -ul stake-historysolana epoch-infohttps://rpc.mainnet.x1.xyzsolana transfer identity.json 10 --allow-unfunded-recipientsolana balance identity.json







Set Environment Variables:
export PARENT_DIRECTORY=/data/x1-testnet
export LEDGER_PATH=$PARENT_DIRECTORY/ledger
export ACCOUNTS_PATH=$PARENT_DIRECTORY/accounts
export SNAPSHOTS_PATH=$PARENT_DIRECTORY/snapshots
export VOTE_ACCOUNT_KEYPAIR=/var/lib/x1/vote-account-keypair.json
Backup the keys and ledger data
Create a snapshot using solana-ledger-tool with a Hard Fork: deactivate the development feature gates & deactivate all other stake accounts.
Modify the Genesis to switch the cluster type from development to testnet
Update Startup Config
⚠️ Warning: You might encounter an error related to a mismatch in the shred version. If this happens, note the reported shred version and update the --expected-shred-version argument with the correct value.
Once the validator starts, monitor the logs for block processing and ensure the validator is running correctly. Announce the restart on Telegram and provide instructions for other validators to update their nodes.
Don't forget to remove the --wait-for-supermajority, --expected-shred-version, and --expected-bank-hash arguments after the restart.



rsync -av $PARENT_DIRECTORY/ $PARENT_DIRECTORY.bak/
export SLOT_X=$(tachyon-ledger-tool --output json-compact --ledger $LEDGER_PATH latest-optimistic-slot | tail -1 | awk '{print $1}')
export VOTE_ADDRESS=$(solana address -k "$VOTE_ACCOUNT_KEYPAIR")
export OTHER_STAKERS=$(solana --url https://rpc.testnet.x1.xyz validators --output json-compact | jq -r '.validators[].voteAccountPubkey' | grep -v "$VOTE_ADDRESS" | xargs xargs -0 printf "--destake-vote-account %s ")
snapshot_output=$(tachyon-ledger-tool create-snapshot $SLOT_X \
--ledger $LEDGER_PATH \
--incremental-snapshot-archive-path $SNAPSHOTS_PATH \
--snapshots $SNAPSHOTS_PATH \
--hard-fork $SLOT_X \
--destake-vote-account $OTHER_STAKERS \
--warp-slot $(($SLOT_X+1001)) \
--enable-capitalization-change \
--deactivate-feature-gate \
tvcF6b1TRz353zKuhBjinZkKzjmihXmBAHJdjNYw1sQ decoMktMcnmiq6t3u7g5BfgcQu91nKZr6RvMYf9z1Jb 7uZBkJXJ1HkuP6R3MJfZs7mLwymBcDbKdqbF51ZWLier HFpdDDNQjvcXnXKec697HDDsyk6tFoWS2o8fkxuhQZpL 2KKG3C6RBnxQo9jVVrbzsoSh41TDXLK7gBc9gduyxSzW 2ry7ygxiYURULZCrypHhveanvP5tzZ4toRwVp89oCNSj 3sioPumDoSRarqzp442ETGUvTCLADgU9eFzKJj375B23 41tVp5qR1XwWRt5WifvtSQyuxtqQWJgEK8w91AtBqSwP 5TuppMutoyzhUSfuYdhgzD47F92GL1g89KpCZQKqedxP 8aXvSuopd1PUj7UhehfXJRg6619RHp8ZvwTyyJHdUYsj 8oBxsYqnCvUTGzgEpxPcnVf7MLbWWPYddE33PftFeBBd 9LZdXeKGeBV6hRLdxS1rHbHoEUsKqesCC2ZAPTPKJAbK 9onWzzvCzNC2jfhxxeqRgs5q7nFAAKpCUvkj6T6GJK9i BeCY6VL4CKQR2QUwe9w3iRtNMN91FMW1sXbRzwfc3WYc CJzY83ggJHqPGDq8VisV3U91jDJLuEaALZooBrXtnnLU DT4n6ABDqs6w4bnfwrXT9rsprcPf6cdDga1egctaPkLC EBq48m8irRKuE7ZnMTLvLg2UuGSqhe8s8oMqnmja1fJw EaQpmC6GtRssaZ3PCUM5YksGqUdMLeZ46BQXYtHYakDS EenyoWx9UMXYKpR8mW5Jmfmy2fRjzUtM7NduYMY8bx33 G6ANXD6ptCSyNd9znZm7j4dEczAJCfx7Cy43oBx3rKHJ GDH5TVdbTPUpRnXaRyQqiKUa7uZAbZ28Q2N9bhbKoMLm Gz1aLrbeQ4Q6PTSafCZcGWZXz91yVRi7ASFzFEr1U4sa HTW2pSyErTj4BV6KBM9NZ9VBUJVxt7sacNWcf76wtzb3 capRxUrBjNkkCpjrJxPGfPaWijB7q3JoDfsWXAnt46r chaie9S2zVfuxJKNRGkyTDokLwWxx6kD2ZLsqQHaDD8 qywiJyZmqTKspFg2LeuUHqcA5nNvBgobqb9UprywS9N wLckV1a64ngtcKPRGU4S4grVTestXjmNjxBjaKZrAcn zk1snxsc6Fh3wsGNbbHAJNHiJoYgF29mMnTSusGx5EJ zkNLP7EQALfC1TYeB3biDU7akDckj8iPkvh9y2Mt2K3 zkiTNuzBKxrCLMKehzuQeKZyLtX2yvFcEKMML8nExU8)
echo $snapshot_output
export SLOT_X=$(echo $snapshot_output | awk '{print $8}')
export BANK_HASH=$(echo $snapshot_output | awk '{print $16}' | sed 's/:$//')
tachyon-ledger-tool modify-genesis --ledger $LEDGER_PATH \
--cluster-type testnet /tmp/modify-tachyon-genesis/
rsync -av /tmp/modify-tachyon-genesis/ $LEDGER_PATH/
rsync -av /tmp/modify-tachyon-genesis/rocksdb/ $LEDGER_PATH/rocksdb/
export SHRED_VERSION=$(tachyon-ledger-tool --ledger $LEDGER_PATH shred-version)
echo export SLOT_X=$SLOT_X;
echo export BANK_HASH=$BANK_HASH;
echo export SHRED_VERSION=$SHRED_VERSION
tachyon-validator \ # <-- Update the binary name from solana-validator to tachyon-validator.
--wait-for-supermajority $(($SLOT_X + 1)) \ # <-- NEW! IMPORTANT! REMOVE AFTER THIS RESTART
--expected-shred-version $SHRED_VERSION \ # <-- NEW! IMPORTANT! REMOVE AFTER THIS RESTART
--expected-bank-hash $BANK_HASH # <-- NEW! IMPORTANT! REMOVE AFTER THIS RESTART
...
solana create-vote-account vote.json identity.json <WITHDRAWER_PUBKEY> --commission 10
solana vote-account <vote pubkey>
solana create-stake-account stake.json 10
solana account stake.json
solana delegate-stake stake.json vote.json
solana stake-account stake.json
solana validators
tachyon-validator --ledger ledger/ monitor
solana block-production
solana leader-schedule
tail -f nohup.out | grep -i "My next leader slot"
solana block-production
solana epoch-info
top
htop
ps aux | grep validator
tachyon-validator exit -f







Validator Version: tachyon-validator v2.0.21
Restart Slot: 48853558
During the restart and migration to v2.0, the testnet will undergo a fork. A new snapshot and genesis will be created, and the network will restart using the updated version.
To streamline the process, all stake accounts will be deactivated, requiring stakers to re-delegate their accounts upon rejoining the network. Rest assured, the original stake amounts will be preserved.
Follow the steps below to migrate your validator to v2.0.
You will need Tachyon v2.0 to generate the correct snapshot.
Clone the Tachyon repository:
Build the repository:
Update your PATH environment variable:
Replace /path/to/tachyon with the correct path. Use pwd to get your current directory path.
If your validator is still running, stop it before proceeding.
Backup your keys 🗝️: Save your validator keys in a secure location.
Backup or remove your old ledger directory:
If you have sufficient disk space, keep the old ledger as a backup.
Otherwise, you can delete it to free up space.
Update the binary name in your startup script:
Rename from solana-validator to tachyon-validator
See the example start script.
Start your validator using the updated configuration.
Make sure your validator is in sync with the network:
You should see the following output:
⚠️ Warning: Do not re-delegate your stake accounts until you're fully synced with the network.
After the restart, you will need to re-delegate your stake accounts. Your original stake amounts will be preserved.
Note: Replace paths with the correct paths to your identity, stake, and vote account files.
📌 Tip: Your stake will take a few epochs to activate and gradually grow to its full amount.
Check the logs to ensure your validator is running correctly.
Use the solana validators command to verify your validator's status.
Monitor the network and assist others during the migration process.
See the migration guide for the exact steps that will be taken to migrate the X1 Testnet Chain to v2.0 on the bootstrap node. These steps will be executed by the X1 team.
Bootstrap Bonus v1 rewards early, performant, and decentralized validators who actively strengthen the X1 network. Its purpose is to accelerate validator growth and enhance decentralization across the consensus layer.
It complements the Incentivized Testnet Rewards and distributes additional rewards based on validator credits and self-stake performance.
This program will remain active until further notice, after which it may evolve into Bootstrap Bonus v2 with updated parameters and incentive structures.
Base Reward: Distributed proportionally based on credits earned.
Performance Bonus: An additional +16 % awarded to validators who meet all Bootstrap Criteria.
Bootstrap Bonus v1 is based solely on self-stake (delegated stake does not count). To maintain decentralization and fair participation:
Minimum self-stake: 1 000 XNT
Maximum rewarded self-stake: 10 000 XNT
Validators may self-stake more than 10 000 XNT, but only the first 10 000 XNT will be eligible for the +16 % performance bonus.
Example: A validator with 1 000 XNT self-stake qualifies fully. A validator with 12 000 XNT self-stake will receive the bonus only on the first 10 000 XNT, while the remaining 2 000 XNT will not earn additional rewards.
To qualify for the Bootstrap Bonus, validators must maintain the baseline performance defined in the criteria below:
Each individual or entity may operate a maximum of 10 validators.
Running more than 10 validators, or attempting to split identity (Sybil) or circumvent this limit, results in disqualification from the program.
Validators must adhere to the spirit of decentralization and act in good faith. Any manipulation or gaming of the system will lead to immediate removal.
Bootstrap Bonus v1 is a dynamic system. The X1 Foundation may adjust thresholds, bonus rates, or eligibility parameters at any time to maintain fairness, performance, and decentralization. Future updates will be released as Bootstrap Bonus v2, introducing new metrics and incentive structures as the validator ecosystem matures.
Bootstrap Bonus v1 is designed to:
Accelerate the growth of the validator network
Encourage self-commitment and fair participation
Enhance decentralization of the consensus layer
Reward high-performing, independent operators
The first DeFi protocol of X1 is not a DEX or a lending market — it is the Bootstrap Bonus. By making validation itself the first yield-bearing activity, X1 aligns incentives directly with the security and decentralization of the chain.
This is what truly sets X1 apart from most other blockchains — the ambition to let everyone participate at the core of the network, not just at the edge. Validators on X1 can participate in consensus with minimal barriers and also participate in block production itself.
In alignment with this vision, X1 introduces the democratization of block production, where randomness and validator performance define eligibility — not stake dominance. Learn more:
In short: Validating becomes the first DeFi of X1.
The Bootstrap Bonus will initiate after X1 completes its initial launch configuration and key operations.
Step-by-step guide for withdrawing accumulated validator rewards using a Ledger hardware wallet as the withdraw authority.
Once your validator’s withdraw authority has been moved from a local keypair (id.json) to a Ledger hardware wallet, you can securely withdraw accumulated rewards directly from your vote account using your Ledger.
You’ll use these across commands:
To get your Ledger’s public key (run on your computer, Solana app open):
Confirm the Withdraw Authority matches your Ledger public key.
Note the Account Balance; keep the account rent-exempt (don’t withdraw to zero).
Plug in Ledger, unlock, open Solana app (close Ledger Live).
Verify the CLI sees your device:
You should see your Ledger address printed.
Withdraw a specific amount (example: 5 XNT):
If your local default wallet has no funds for fees, make the Ledger pay the fee too:
Tips
Keep the vote account rent-exempt by leaving a buffer.
If you hit “unfunded recipient”, first withdraw to your Ledger address, then transfer elsewhere.
Check the vote account:
Check the destination wallet:
Create fungible tokens and NFTs on X1
The Metaplex Protocol is a decentralized protocol for Solana and the SVM ecosystem, designed to facilitate the creation, sale, and management of digital assets.
It is the preferred platform for digital asset creation and management on X1, offering tools and standards for developers, creators and businesses to build decentralized applications.
Known for powering digital assets including NFTs, fungible tokens, RWAs, gaming assets, DePIN assets and more, Metaplex is one of the most widely used blockchain protocols and developer platforms, with over 880M assets minted for a total transaction value of $9.8B (as of April 2025).
This is a Summary of all the products deployed on X1
The Metaplex Token Metadata Program is the de facto standard for Fungible Tokens on Solana and across all SVMs, offering the widest level of support among SVM dApps and protocols. It is built on the SPL-Token and SPL-Token-2022 token programs.
The Token Metadata program supports a wide array of Token Standards depending on the requirements of the creator.
Metadata can easily be attached to SPL-Tokens using the Token Metadata program to make Fungible or Semi-Fungible tokens. The Token Metadata program attaches a Metadata account to tokens to make them recognized and readable across dApps, protocols and explorers.
Token Metadata offers a basic standard for supporting non-fungible digital assets, allowing users to create onchain art, PFPs, or other singular assets. It supports common functionality such as delegation, sales, owned escrow (e.g. ERC-6551 equivalent) and more.
All of the same functionality of Non-Fungible Tokens with additional programmability and royalty enforcement. Programmable NFTs support attaching a ruleset to an NFT that can prevent the asset from being sold or delegated to malicious platforms, marketplaces that don’t support royalties, and more.
Token Metadata also includes the ability to print editions, commonly used for 1/1 or prints of artwork. The protocol utilizes a Master Edition NFT that can have derivative artworks printed off as numbered edition copies.
More information and details on developing with Token Metadata can be found below:
Documentation:
Javascript Package:
Rust Crate:
Metaplex Core is the most programmatic standard for NFTs powering the next generation of digital assets on X1. It’s already been adopted by all major dApps and protocols for creating the next wave of NFTs. Metaplex Core offers all of the functionality of the previous Token Metadata standard and more, all while improving efficiency and cost by an order of magnitude.
Core supports the same features as Token Metadata such as Editions and Royalties enforcement, while also enabling new functionality through its novel Plugin system. The Plugin system creates a common interface that allows additional features to be added to an asset dynamically, even going so far as to allow third party integrations installed directly to NFTs. Popular plugins include Royalty, Attributes, Autograph, and more!
The Core protocol utilizes a single account design that allows it to achieve the smallest onchain footprint, reducing the overall rent cost to the smallest possible amount. This compact, single account also allows the protocol to abstract away many of the complexities of SVM by utilizing its advanced Plugin system, allowing users to have all of the flexibility of a custom protocol without having to develop a new program.
More information and details on developing with Core can be found below:
Documentation:
Javascript Package:
Rust Crate:
The Metaplex Candy Machine protocol is the simplest way to deploy and launch NFT collections on the SVM. It works by deploying a lazy-minting protocol that stores the asset data for an entire collection and allows minters to mint assets from the collection. Candy Machine supports a wide array of “Guards” which offer a range of conditions that must first be met in order to mint an asset from the collection.
Popular Guards include Sol Payment, which represents the sale price; Token Gate, which can be used to gate the collection to an allowlist token mint; Token Payment, which allows payment in a custom token of the creator’s choosing; Start Date, which establishes the start time of the sale. Candy Machine currently supports over 20 guards with more being added regularly!
More information and details on launching with Candy Machine can be found below:
Candy Machine for Token Metadata:
Candy Machine for Core:
Unlocking Hardware Potential: Dynamic Thread Scaling in X1 Blockchain's Execution Scheduler
Once a user signs the transaction in their wallet, the wallet sends the transaction to an X1 RPC server. RPC servers can be run by any validator. Upon receiving the transaction, the RPC server checks the leader schedule (determined once per epoch, about 2 days long) and forwards the transaction to the current leader as well as the next two leaders. The leader is in charge of producing a block for the current slot and is assigned four consecutive slots. Slots usually last around 400 milliseconds.
Creating fair incentives for high performant validator nodes
Leader
Solana stands out because it was designed from the outset to operate without a mempool. Unlike traditional blockchains that use gossip protocols to randomly and broadly propagate transactions across the network, Solana forwards all transactions to a predetermined lead validator, known as the leader, for each slot. Once an RPC receives a transaction message to be included in a block, it must be forwarded to the leader.
A leader schedule is produced before every epoch (approximately every two days). The upcoming epoch is divided into slots, each fixed at 400 milliseconds, and a leader is chosen for each slot. The sequence of leaders is determined ahead of time, and validators know when they will become the leader. This rotation happens very quickly, with leaders changing every few hundred milliseconds.
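As a quick sanity check on these numbers (approximate, since slot times can vary slightly):
$$\frac{2\ \text{days} \times 86{,}400\ \text{s/day}}{0.4\ \text{s/slot}} = 432{,}000\ \text{slots per epoch}, \qquad \frac{432{,}000}{4} = 108{,}000\ \text{leader rotations per epoch}$$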
solana config set -k ~/.config/solana/id.json
sudo ufw enable
sudo ufw allow 3334/tcp
# Update and install dependencies
sudo apt update
sudo apt install -y nodejs jq
# Clone the Pinger repository
git clone https://github.com/x1-labs/x1-pinger/
cd x1-pinger
npm install
# Set up the system service
cp system/x1-pinger.service /etc/systemd/system/x1-pinger.service
# If installed in a different directory, update the service file accordingly
# nano /etc/systemd/system/x1-pinger.service
# Enable and start the service
sudo systemctl enable --now x1-pinger
sudo systemctl status x1-pinger
journalctl -u x1-pinger -f
sudo systemctl stop x1-pinger
curl http://localhost:3334/ping_times | jq
Token Metadata: The Token Metadata program supports a wide array of Token Standards depending on the requirements of the creator. It is the de facto standard for Fungible Tokens.
Core: The most programmatic standard for NFTs, powering the next generation of digital assets.
Candy Machine: The simplest way to deploy and launch NFT collections on the SVM. It works with both Token Metadata and Core NFTs.
Tachyon Version: ≥ v2.2.17. Must run the latest compatible version of Tachyon.
Active Validator: Required. Must be part of the active validator set.
Max Commission: 10 %. Commission must not exceed 10 %.
Vote Credits: ≥ 97 % of network average. Demonstrates consistent performance and uptime.
Skip Rate: ≤ 10 % above network average. Validators with excessive missed votes are disqualified.
Strengthen the resilience and quality of the validator set
Lay the foundation for future Bootstrap Bonus v2 incentives
VOTE_ACCOUNT_PUBKEY: your validator vote account address (vote.json)
RPC: the cluster endpoint (https://rpc.testnet.x1.xyz)
LEDGER_PATH: the Ledger device path (usb://ledger?key=0/0)
DESTINATION_PUBKEY: the wallet to receive rewards (usually the same as your Ledger address)
no device found / protocol error: the Solana app is not open or the USB device is not recognized. Open the Solana app, close Ledger Live, and reconnect.
insufficient funds for fee: the fee-payer is empty. Add --fee-payer "usb://ledger?key=0/0" or fund your local wallet.
signature verification failed: wrong withdraw authority or wrong cluster. Re-check the show-vote-account output and --url.
unfunded recipient: the destination wallet has never been funded. Withdraw to your Ledger address first, then forward.
Add the export to your ~/.bashrc or ~/.zshrc to make the change permanent. Verify the installation:
You should see the following output:
Note: Replace paths with your actual ledger directory and backup location.
Shred Version: 41710
Bank Hash: BNd9PkMbZmEHGeErE6amj9Fb3phG5BsPH4FFbQfa2dVx
solana-keygen pubkey "usb://ledger?key=0/0"
solana show-vote-account <VOTE_ACCOUNT_PUBKEY> --url <RPC>
solana-keygen pubkey "usb://ledger?key=0/0"
solana withdraw-from-vote-account \
<VOTE_ACCOUNT_PUBKEY> \
<DESTINATION_PUBKEY> \
5 \
--authorized-withdrawer "usb://ledger?key=0/0" \
--url <RPC>solana withdraw-from-vote-account \
<VOTE_ACCOUNT_PUBKEY> \
<DESTINATION_PUBKEY> \
5 \
--authorized-withdrawer "usb://ledger?key=0/0" \
--fee-payer "usb://ledger?key=0/0" \
--url <RPC>
solana show-vote-account <VOTE_ACCOUNT_PUBKEY> --url <RPC>
solana balance <DESTINATION_PUBKEY> --url <RPC>
tachyon-validator --version
tachyon-validator 2.0.21 (src:00000000; feat:2908148756, client:Tachyon)
git clone https://github.com/x1-labs/tachyon.git
cd tachyon
cargo build --release
export PATH=$PATH:/path/to/tachyon/target/release
tachyon-validator
solana catchup --our-localhost
XXXXXXXXXXXXXXXXXXXXXXXXXX has caught up (us:XXXXXXXX them:XXXXXXXX)
solana -k ~/identity.json delegate-stake ~/stake-account.json ~/vote-account.json
# Move the ledger to a backup location
mv /home/ubuntu/ledger /path/to/backup/location
# Or delete the ledger
rm -rf /home/ubuntu/ledger
Once the signed transaction reaches the current leader, the leader validates the transaction's signature and performs other pre-processing steps before scheduling the transaction for execution.
Whereas the EVM is a "single-threaded" runtime environment, meaning it can only process one contract at a time, the SVM is multi-threaded and can process more transactions in significantly less time. Each thread contains a queue of transactions waiting to be executed, with transactions randomly assigned to a queue.
The default scheduler implementation is multi-threaded, with each thread maintaining a queue of transactions waiting for execution. Transactions are randomly assigned to a single thread’s queue. Each queue is ordered by priority fee (denominated in fee paid per compute unit requested) and time.
Note that there is no global ordering of transactions queued for execution; there is just a local ordering in each thread’s queue.
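In other words, transactions within a queue are ranked by a key along the lines of the following (the exact tie-breaking rule is an assumption here):
$$\text{priority} = \frac{\text{priority fee}}{\text{compute units requested}}, \qquad \text{ties broken by arrival time}$$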
Many blockchain networks construct entire blocks before broadcasting them, known as discrete block building. X1 and Solana, in contrast, employ continuous block building, which involves assembling and streaming blocks dynamically as they are created during an allocated time slot, significantly reducing latency.
Each slot lasts 400 milliseconds, and each leader is assigned four consecutive slots (1.6 seconds) before rotation to the next leader. For a block to gain acceptance, all transactions within it must be valid and reproducible by other nodes.
Two slots before assuming leadership, a validator halts transaction forwarding to prepare for its upcoming workload. During this interval, inbound traffic spikes, reaching over a gigabyte per second as the entire network directs packets to the incoming leader.
Upon receipt, transaction messages enter the Transaction Processing Unit (TPU), the validator's core logic responsible for block production. The transaction processing sequence begins with the Fetch Stage, where transactions are received via QUIC. Transactions then progress to the SigVerify Stage, where the validator verifies signatures, checks for the correct number of signatures, and eliminates duplicate transactions.
The banking stage can be described as the block-building stage. It is the most important stage of the TPU, which gets its name from the “bank“. A bank is just the state at a given block. For every block, X1 has a bank that is used to access state at that block. When a block becomes finalized after enough validators vote on it, they will flush account updates from the bank to disk, making them permanent. The final state of the chain is the result of all confirmed transactions. This state can always be recreated from the blockchain history deterministically.
Transactions are processed in parallel and packaged into ledger “entries,” which are batches of 64 non-conflicting transactions. Parallel transaction processing on X1 is made easy because each transaction must include a complete list of all the accounts it will read and write to. This design choice places a burden on developers but allows the validator to avoid race conditions by easily selecting only non-conflicting transactions for execution within each entry. Transactions conflict if they both attempt to write to the same account (two writes) or if one attempts to read from and the other writes to the same account (read + write). Thus conflicting transactions go into different entries and are executed sequentially, while non-conflicting transactions are executed in parallel.
In the above diagram, each box represents a single transaction. Each transaction is labeled with the accounts it locks. Execution thread 1 locks accounts [a,b,c], [d], fails to lock [c,j], and [f,g]. Execution thread 2 locks accounts [w], [x,y,z], fails to lock [c], and [v]. The remaining transactions are re-scheduled for future execution.
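The conflict rule can be stated set-theoretically. For transactions $t_1$ and $t_2$ with declared read sets $R_1, R_2$ and write sets $W_1, W_2$:
$$t_1 \text{ conflicts with } t_2 \iff W_1 \cap W_2 \neq \emptyset \;\lor\; W_1 \cap R_2 \neq \emptyset \;\lor\; R_1 \cap W_2 \neq \emptyset$$
Non-conflicting transactions can safely share an entry and execute in parallel; conflicting ones are placed in separate entries and run sequentially.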
This is one way X1 and Solana achieve higher performance than competing chains. When multiple transactions don't need to touch the same state, they can be executed in parallel, which improves the throughput of the chain. However, this imposes a cost on developers, as any piece of state that may be required by a transaction must be specified up front.
There are six threads processing transactions in parallel, with four dedicated to normal transactions and two exclusively handling vote transactions which are integral to X1 and Solana’s consensus mechanisms. All parallelization of processing is achieved through multiple CPU cores; validators have no GPU requirements.
Once transactions have been grouped into entries, they are ready to be executed by the Solana Virtual Machine (SVM). The accounts necessary for the transaction are locked; checks are run to confirm the transaction is recent but hasn’t already been processed. The accounts are loaded, and the transaction logic is executed, updating the account states. A hash of the entry will be sent to the Proof of History service to be recorded (more on this in the next section). If the recording process is successful, all changes will be committed to the bank, and the locks on each account placed in the first step are lifted.
Solana's high throughput and rapid transaction processing capabilities are largely attributed to its parallel processing architecture. However, a significant limitation in this architecture is the fixed number of banking threads allocated for scheduling transaction execution. Currently, Solana limits the number of banking threads to just four, irrespective of the underlying hardware's capabilities. This constraint results in underutilization of modern multi-core processors, which are increasingly common in node environments.
Banking threads in Solana are responsible for executing transactions, managing state changes, and processing smart contracts. While the parallelism in Solana's architecture theoretically supports high throughput, the artificial limitation of four banking threads leads to several inefficiencies:
1. Underutilization of Multi-Core Processors: Contemporary processors frequently feature 16, 32, or more CPU cores. The restriction to four banking threads on such hardware fails to harness the full computational potential, resulting in significant idle processing capacity.
2. Execution Bottlenecks: The limited number of threads introduces a bottleneck in the transaction processing pipeline, constraining the network's ability to handle peak transaction loads. This results in increased latency and reduced throughput.
3. Suboptimal Parallelism: The effectiveness of Solana's parallel processing is curtailed by the thread limitation. As a result, the full benefits of concurrent transaction processing are not realized, diminishing the overall efficiency of the network.
To address these limitations, X1 Blockchain introduces a dynamically scaling execution scheduler. This innovation allows the number of banking threads to scale in accordance with the CPU core count available on the node, optimizing the utilization of modern hardware.
Adaptive Thread Allocation: X1 Blockchain’s execution scheduler dynamically adjusts the number of banking threads based on the detected CPU core count of the node. For instance, on a node with a 32-core processor, the scheduler could allocate up to 32 banking threads, significantly enhancing transaction processing capacity.
Enhanced Parallelism: By leveraging additional threads, X1 Blockchain maximizes parallel transaction processing. This approach minimizes the bottlenecks associated with a limited thread count, leading to a more efficient execution pipeline.
Increased Throughput: The ability to process more transactions concurrently directly correlates with improved network throughput. As more threads are utilized, transaction confirmation times are reduced, particularly under high network load conditions.
Scalability and Future-Proofing: X1 Blockchain's dynamic thread scaling mechanism is designed to scale with advancements in hardware technology. As multi-core processors become more powerful and prevalent, the blockchain can seamlessly scale its execution capabilities, ensuring long-term viability and performance.
By aligning thread allocation with available hardware resources, X1 Blockchain aims to eliminate the bottlenecks that hinder transaction throughput in current blockchain architectures. This approach enhances network efficiency and future-proofs the blockchain against the limitations of present-day systems, ensuring that it can scale effectively as hardware technology evolves.
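A minimal shell sketch of the scaling rule described above, assuming the thread count simply tracks the detected core count; the variable name below is illustrative only and not an actual tachyon-validator flag:
# Illustrative only: derive a banking-thread count from the host's core count.
# BANKING_THREADS is a hypothetical name, not a real tachyon-validator option.
CORES=$(nproc)
BANKING_THREADS=$CORES   # e.g. 32 threads on a 32-core node, vs. Solana's fixed 4
echo "This host would be allocated $BANKING_THREADS banking threads"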
On your server:
Confirm the Withdraw Authority matches the id.json key you plan to replace.
Verify the pubkey of id.json:
This ensures you're confirming the correct id.json key before updating its withdraw authority.
Plug in your Ledger, open the Solana app, and run:
Copy the output — this is your Ledger withdraw authority address:
You only need the public key. No Ledger connection is required on the server.
Example for X1 testnet:
Run this on the server:
Signs with your current withdraw authority (id.json)
Assigns your Ledger as the new withdraw authority
Pays the transaction fee from the same account
You are changing who can withdraw rewards. Validator operations are unaffected.
You should see:
Create a structured backup and store it in your password manager or encrypted vault:
If you’re on macOS:
This ensures only your current Linux user can access key files — protecting against accidental leaks or unauthorized reads.
Validator identity: identity.json. Operates the validator node.
Vote account: vote.json. Account linked to the validator for consensus and rewards.
Withdraw authority (new): Ledger hardware wallet. Securely holds withdraw rights.
Withdraw authority (old): id.json (backed up).
"insufficient funds for fee": id.json has 0 balance. Send a small amount of testnet XNT to it before retrying.
"Signature verification failed": id.json is not the current withdraw authority for this vote account, or you're pointed at the wrong cluster.
"This account may not be used to pay transaction fees": you tried to use a restricted account (e.g., the validator identity) as the fee-payer. Use a normal wallet or id.json itself.
Validators with a higher stake have a higher probability of being chosen as a leader within each epoch. During each slot, transaction messages are forwarded to the leader, who has the opportunity to produce a block. When it is a validator’s turn, they switch to "leader mode," begin actively processing transactions and broadcasting blocks to the rest of the network.
The Solana "skip rate"—the percentage of slots in which a block was not produced—varies from 2% to 10%. While forks are a primary reason for skipped slots, validator performance is another significant factor. Currently, the only selection criteria for becoming a leader is stake weight, which does not consider the performance of the validator.
If a validator is chosen as a leader but has poor performance due to network connection issues or hardware problems, they risk skipping slots they would have been rewarded for. This not only slows down the network but also increases the risk of transactions being dropped, which is detrimental to the chain's overall performance.
On Solana, there are nodes with significant stake weight that are regularly selected as leaders despite having poor historic performance. Some of these nodes exhibit skip rates exceeding 50%, yet they continue to be selected in the leader schedule. Solana currently does not reject poorly performing validators from the leader schedule, leading to inefficiencies in the network.
To optimize the system, it's clear that leader selection should be based on more than just stake weight. In addition to stake weight, the X1 Blockchain will introduce new criteria for eligibility in the leader schedule. The stake weight will be virtualized into a score, which can be adjusted based on a validator's historical performance. Should a validator's score drop due to poor performance, they will either be removed from the leader schedule or rejected from becoming a leader altogether.
This solution leverages existing economic incentives, encouraging validators to maintain high-functioning and high-performing nodes. By incorporating both relative stake and historical performance into the selection process, the X1 Blockchain aims to ensure that only the most capable validators are chosen as leaders, thereby improving the overall efficiency and reliability of the network.
In addition, randomness can play a critical role in combating centralisation over time. Cardano, for instance, employs a Verifiable Random Function (VRF) to ensure secure and unbiased randomness in its leader selection process. X1 will maintain Solana’s predictable leader schedule, but with a key difference: VRF will be integrated into the Anti-Collusion Protocol (ACP) to introduce randomness in leader selection. This will be based on the entire validator set, rather than relying solely on stake-weight as Solana does today. ACP is a powerful method to combat the centralization forces inherent in PoS systems.
The leader selection process is therefore multi-factorial, combining stake weight, randomness, and performance.
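To make the idea concrete, here is a minimal sketch of how a virtualized score could combine stake weight with historical performance. The formula, variable names, and thresholds are illustrative assumptions, not the actual X1 scoring rules:

```bash
# Hypothetical leader-score sketch: stake weight discounted by historical skip rate.
STAKE=250000        # validator's active stake (arbitrary units)
SKIP_RATE=0.42      # fraction of recent leader slots skipped
MIN_PERF=0.50       # example cutoff below which a node is dropped from the schedule

PERF=$(echo "1 - $SKIP_RATE" | bc -l)        # performance factor in [0, 1]
SCORE=$(echo "$STAKE * $PERF" | bc -l)       # virtualized stake used for leader selection

if (( $(echo "$PERF < $MIN_PERF" | bc -l) )); then
  echo "score=$SCORE -> excluded from the leader schedule this epoch"
else
  echo "score=$SCORE -> eligible, weighted by adjusted stake"
fi
```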

How validators are rewarded for their participation in the X1 testnet.
💡 TL;DR
Credits → XNT: 50,000 credits = 1 XNT
Immediate (10%): Claimable at genesis, stake right away
Vested (90%): Locked for 365 days, proportional unlock based on uptime + performance
Example: 1B credits = 20,000 XNT → 2,000 claimable + 18,000 vested
Goal: Fair rewards + immediate decentralization of the consensus layer at mainnet
The X1 incentivized testnet is designed to both reward validators for their participation and bootstrap decentralization at mainnet launch. By running validators, producing blocks, and voting on leader output, participants earn Validator Credits. These credits directly determine each validator’s allocation of XNT once X1 mainnet goes live.
Every time a validator votes on a block, they accumulate credits.
Credits reflect the validator’s uptime, participation, and consistency during the testnet.
At mainnet launch, credits are converted to XNT according to a fixed conversion ratio:
50,000 Credits = 1 XNT
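As a quick sanity check of the conversion and the 10/90 split described on this page (plain shell arithmetic, using the 1B-credit example from further below):

```bash
CREDITS=1000000000            # 1B credits earned during the testnet
RATE=50000                    # 50,000 credits per XNT
TOTAL=$((CREDITS / RATE))     # 20,000 XNT
IMMEDIATE=$((TOTAL / 10))     # 10% claimable at genesis = 2,000 XNT
VESTED=$((TOTAL - IMMEDIATE)) # 90% locked for 365 days = 18,000 XNT
echo "total=$TOTAL immediate=$IMMEDIATE vested=$VESTED"
```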
For monitoring validator performance and accumulated credits, check .
Validators’ earned XNT will be distributed in two phases:
10% of earned XNT will be claimable at genesis.
Validators will be airdropped a small amount of XNT to cover gas for claiming.
This allocation enables validators to stake into their own node immediately, ensuring that X1 launches with strong decentralization for the consensus layer.
The remaining 90% of earned XNT will be subject to a 365-day lock with vesting.
To unlock the full allocation, validators must maintain their validator identity for the entire 1-year period.
If a validator goes offline earlier, they unlock only a proportional share of the 90%; see the vesting scenarios and the calculation sketch below.
Performance will also be factored into vesting eligibility. Validators with poor performance (low uptime, missed votes, or other criteria to be defined) will not receive their full rewards.
A dedicated Vesting Dashboard will allow validators to track their progress and vesting status over time.
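A rough sketch of the proportional unlock described above, assuming uptime is pro-rated by month within the 365-day window (the exact accounting metric and rounding used by the team may differ):

```bash
VESTED_XNT=18000        # 90% portion from the 1B-credit example
MONTHS_ONLINE=3         # active validator uptime within the 365-day window
UNLOCKED=$((VESTED_XNT * MONTHS_ONLINE / 12))
echo "unlocked after day 365: $UNLOCKED XNT"   # 4,500 XNT, i.e. 1/4 of the vested portion
```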
Let’s assume a validator has accumulated 1,000,000,000 (1B) credits during the testnet.
Conversion to XNT
1B credits ÷ 50,000 = 20,000 XNT
Distribution Breakdown
10% immediately claimable = 2,000 XNT
This model ensures that:
Rewards are fairly distributed based on actual testnet participation.
Validators are incentivized to secure the chain long-term.
X1 achieves immediate and sustainable decentralization for the consensus layer from day one.
All details outlined above are subject to change. The X1 team may adjust reward mechanics, ratios, performance criteria, or vesting structures prior to mainnet launch.
Basic setup for creating programs on X1 with Anchor
You can also use .
This section covers the steps to set up your local environment for X1 development.
On Linux, run this single command to install all dependencies.
After installation, you should see output similar to the following:
If the quick installation command above doesn't work, please refer to instructions below to install each dependency individually.
solana show-vote-account <VOTE_PUBKEY>
solana-keygen pubkey ~/.config/solana/id.json
solana-keygen pubkey "usb://ledger?key=0/0"
<LEDGER_PUBKEY>
solana config set --url https://rpc.testnet.x1.xyz
solana vote-authorize-withdrawer-checked \
<VOTE_PUBKEY> \
~/.config/solana/id.json \
<LEDGER_PUBKEY> \
--fee-payer ~/.config/solana/id.json \
--url https://rpc.testnet.x1.xyz
solana show-vote-account <VOTE_PUBKEY> --url https://rpc.testnet.x1.xyz | grep "Withdraw Authority"
Withdraw Authority: <LEDGER_PUBKEY>
mkdir -p ~/X1-backups/keys
cp ~/.config/solana/id.json ~/X1-backups/keys/
PUBKEY=$(solana-keygen pubkey ~/X1-backups/keys/id.json)
SHA256=$(shasum -a 256 ~/X1-backups/keys/id.json | awk '{print $1}')
cat > ~/X1-backups/keys/id_key_metadata.txt <<EOF
--- PRIVATE KEY (id.json) ---
$(cat ~/X1-backups/keys/id.json)
--- PUBLIC KEY ---
$PUBKEY
--- SHA256 (file checksum) ---
$SHA256
EOF
pbcopy < ~/X1-backups/keys/id_key_metadata.txt
echo "✅ id_key_metadata.txt copied to clipboard — paste it into your password manager."
chmod 700 ~/.config/solana
chmod 600 ~/.config/solana/*.json







Running for the full 12 months = 100% of the vested portion.
Unlock timing: Regardless of participation length, vested tokens unlock only after 365 days from mainnet launch.
Unvested tokens are deposited into the X1 Stake Pool Delegation Program, where they continue to earn staking rewards over time.
90% locked & vested over 365 days = 18,000 XNT
Vesting Scenarios
Validator stays online for 3 months → unlocks ~4,500 XNT (¼ of the 18,000).
Validator stays online for full 12 months → unlocks all 18,000 XNT.
Unlock happens after 365 days, regardless of partial or full participation.
Important: Vesting is based on active validator uptime within the 365-day window.
Example: 6 months online → 3 months offline → 6 months online = ¾ unlocked, not 100%.
Any locked tokens that have not vested by the end of the 365-day window are forfeited.
| Credits earned | Total XNT | Immediate (10%) | Vested (90%) |
| --- | --- | --- | --- |
| 100M | 2,000 XNT | 200 XNT | 1,800 XNT |
| 500M | 10,000 XNT | 1,000 XNT | 9,000 XNT |
| 1B | 20,000 XNT | 2,000 XNT | 18,000 XNT |
| 2B | 40,000 XNT | 4,000 XNT | 36,000 XNT |
| 5B | 100,000 XNT | 10,000 XNT | 90,000 XNT |
Check that your id.json or hardware wallet public key has received the validator airdrop:
Your balance should be greater than 0.
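For example, a minimal check from any machine with the Solana CLI installed (the same commands appear in the command reference further down this page; replace <PUBKEY> with your own key):

```bash
solana config set --url https://rpc.mainnet.x1.xyz   # point the CLI at X1 mainnet
solana balance <PUBKEY>                              # should print a value greater than 0
```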
Send 1 XNT (or a small amount) to both:
id.json — used to manage startup of the validator
identity.json — your validator identity key
Do this using Backpack Wallet.
Make sure you’re operating with the correct signer:
Check balances:
Navigate to where your keypair files are stored:
Create a vote account for vote.json, and assign your hardware wallet (Ledger) as the withdraw authority.
On your local computer, plug in your Ledger, open the Solana app, and run:
Copy the output — this is your Ledger withdraw authority address:
💡 You only need the public key from your Ledger. The Ledger does not need to be connected to your validator server.
Create a vote account for vote.json, using your identity.json keypair as the signer, and set your Ledger pubkey as the withdraw authority:
identity.json → signs the vote account creation and becomes the validator identity
<LEDGER_PUBKEY> → becomes the withdraw authority
--commission 10 → your validator keeps 10%, and delegators receive 90%
Confirm it was successfully created:
You should see details such as:
Identity pubkey
Withdraw authority
Commission %
Epoch credits and status
Create a stake account and deposit a small amount (e.g. 0.1 XNT):
This initializes stake.json using funds from your id.json.
Authorize your Ledger hardware wallet as the withdraw authority:
Now only your hardware wallet can withdraw or modify the stake account.
Send more XNT to your stake account from your hardware wallet using Backpack.
In Backpack:
From: your hardware wallet pubkey
To: your stake.json pubkey
Amount: the amount you want to stake to your validator
Verify balance:
Link your stake account to your vote account:
This command delegates your staked XNT to your validator’s vote account.
View your stake account details:
Example output:
The stake will become active starting from the listed epoch once the validator is participating in consensus.
Before switching to mainnet, clear your old testnet ledger to avoid conflicts:
Open your validator startup script:
Replace its contents with:
Save and exit the editor.
From your home directory:
Check validator logs:
Check catch-up status:
Monitor validator performance:
Shut down your validator cleanly:
Alternatively, check running processes:
And terminate with kill <PID> if needed.
Delete the outdated ledger:
Make sure you're using the official X1 mainnet RPC:
Open your validator startup script:
Replace its contents with:
Save and exit the editor.
Navigate to the folder where your Solana keypairs are stored:
Tell Solana CLI to use the correct identity key:
From your home directory, start the validator:
Monitor validator logs to verify it's running:
Inspect your stake account:
✅ If your stake was delegated before epoch 11, it remains valid.
❌ If delegated after epoch 11, you'll need to re-delegate.
Use this guide to re-delegate: 👉 https://docs.x1.xyz/validating/connect-validator-to-x1-mainnet
Do it: 👉 Validator Misc Commands
Check validator visibility and status:
Epoch 10 is the new genesis point after master node reset.
Stakes initiated prior to epoch 11 should auto-activate, but verify.
Validators must reset ledger and reconnect to RPC cleanly.
Need help? Ping us in Telegram.
solana config set --url https://rpc.mainnet.x1.xyz
solana balance <PUBKEY>
solana config set -k ~/.config/solana/id.json
solana balance ~/.config/solana/id.json
solana balance ~/.config/solana/identity.json
cd ~/.config/solana
solana-keygen pubkey "usb://ledger?key=0/0"
<LEDGER_PUBKEY>
solana create-vote-account vote.json identity.json <LEDGER_PUBKEY> --commission 10
solana vote-account <VOTE_ACCOUNT_PUBKEY>
solana create-stake-account stake.json 0.1
solana stake-authorize stake.json --new-withdraw-authority <HW_WALLET_PUBKEY>
solana balance ~/.config/solana/stake.json
solana delegate-stake stake.json vote.json
solana stake-account stake.json
Balance: 100.1 XNT
Rent Exempt Reserve: 0.00228288 XNT
Delegated Stake: 100.0 XNT
Active Stake: 0 XNT
Activating Stake: 100.0 XNT
Stake activates starting from epoch: 9
Delegated Vote Account Address: <VOTE_PUBKEY>
Stake Authority: <STAKE_AUTHORITY_PUBKEY>
Withdraw Authority: <HW_WALLET_PUBKEY>
rm -rf ~/ledger
nano $HOME/bin/validator.sh
#!/bin/bash
export RUST_LOG=solana_metrics=warn,info
exec tachyon-validator \
--identity $HOME/.config/solana/identity.json \
--vote-account $HOME/.config/solana/vote.json \
--entrypoint entrypoint0.mainnet.x1.xyz:8001 \
--entrypoint entrypoint1.mainnet.x1.xyz:8001 \
--entrypoint entrypoint2.mainnet.x1.xyz:8001 \
--entrypoint entrypoint3.mainnet.x1.xyz:8001 \
--entrypoint entrypoint4.mainnet.x1.xyz:8001 \
--known-validator 7ufaUVtQKzGu5tpFtii9Cg8kR4jcpjQSXwsF3oVPSMZA \
--known-validator 5Rzytnub9yGTFHqSmauFLsAbdXFbehMwPBLiuEgKajUN \
--known-validator 4V2QkkWce8bwTzvvwPiNRNQ4W433ZsGQi9aWU12Q8uBF \
--known-validator CkMwg4TM6jaSC5rJALQjvLc51XFY5pJ1H9f1Tmu5Qdxs \
--known-validator 7J5wJaH55ZYjCCmCMt7Gb3QL6FGFmjz5U8b6NcbzfoTy \
--only-known-rpc \
--log $HOME/validator.log \
--ledger $HOME/ledger \
--rpc-port 8899 \
--full-rpc-api \
--dynamic-port-range 8000-8020 \
--wal-recovery-mode skip_any_corrupted_record \
--limit-ledger-size 50000000 \
--enable-rpc-transaction-history \
--enable-extended-tx-metadata-storage \
--rpc-pubsub-enable-block-subscription \
--full-snapshot-interval-slots 5000 \
--maximum-incremental-snapshots-to-retain 10 \
--maximum-full-snapshots-to-retain 50
cd $HOME
nohup $HOME/bin/validator.sh &
tail -f $HOME/validator.log
solana catchup --our-localhost
tachyon-validator --ledger ./ledger monitor
ps aux | grep validator
tachyon-validator exit -f
rm -rf ledger
solana config set -u https://rpc.mainnet.x1.xyz/
nano $HOME/bin/validator.sh
#!/bin/bash
export RUST_LOG=solana_metrics=warn,info
exec tachyon-validator \
--identity $HOME/.config/solana/identity.json \
--vote-account $HOME/.config/solana/vote.json \
--entrypoint entrypoint0.mainnet.x1.xyz:8001 \
--entrypoint entrypoint1.mainnet.x1.xyz:8001 \
--entrypoint entrypoint2.mainnet.x1.xyz:8001 \
--entrypoint entrypoint3.mainnet.x1.xyz:8001 \
--entrypoint entrypoint4.mainnet.x1.xyz:8001 \
--known-validator 7ufaUVtQKzGu5tpFtii9Cg8kR4jcpjQSXwsF3oVPSMZA \
--known-validator 5Rzytnub9yGTFHqSmauFLsAbdXFbehMwPBLiuEgKajUN \
--known-validator 4V2QkkWce8bwTzvvwPiNRNQ4W433ZsGQi9aWU12Q8uBF \
--known-validator CkMwg4TM6jaSC5rJALQjvLc51XFY5pJ1H9f1Tmu5Qdxs \
--known-validator 7J5wJaH55ZYjCCmCMt7Gb3QL6FGFmjz5U8b6NcbzfoTy \
--only-known-rpc \
--log $HOME/validator.log \
--ledger $HOME/ledger \
--rpc-port 8899 \
--full-rpc-api \
--dynamic-port-range 8000-8020 \
--wal-recovery-mode skip_any_corrupted_record \
--limit-ledger-size 50000000 \
--enable-rpc-transaction-history \
--enable-extended-tx-metadata-storage \
--rpc-pubsub-enable-block-subscription \
--full-snapshot-interval-slots 5000 \
--maximum-incremental-snapshots-to-retain 10 \
--maximum-full-snapshots-to-retain 50
cd ~/.config/solana
solana config set -k ~/.config/solana/id.json
cd $HOME
nohup $HOME/bin/validator.sh &
tail -f $HOME/validator.log
solana stake-account stake.json
Clone the Tachyon repository:
Build Tachyon and the solana tools:
Update your PATH environment variable:
Verify the installation:
If you see the versions listed below, the installation was successful:
Your system will need to be tuned in order to run properly. Your validator may not start without the settings below.
Add LimitNOFILE=1000000 to the [Service] section of your systemd service file, if you use one; otherwise add DefaultLimitNOFILE=1000000 to the [Manager] section of /etc/systemd/system.conf, using the nano and systemctl daemon-reload commands shown below.
Close all open sessions (log out, then log in again).
To verify set network, use:
Use the solana-keygen command to generate a new wallet. It will generate a 12-word seed phrase (also known as a mnemonic or recovery phrase). Store it somewhere safe.
To switch between keypairs:
Make sure you have XNT in your wallet before you continue.
Check balance:
In your Ubuntu home directory (e.g. /home/ubuntu/), create a folder called bin. Inside that folder, create a file called validator.sh and make it executable:
Next, open the validator.sh file for editing:
Copy and paste the following contents into validator.sh then save the file:
Make validator startup script executable:
Make sure you're in the home directory:
Start validator with nohup:
Check validator logs to see if it's running:
Check catch-up status:
Use the monitor command to check validator operations:
See all nodes connected to the network, whether they are staked or not. Your identity.json pubkey should show up there.
Check validator process:
Kill validator process:
You’ll see something like:
| Drive type | 4K random IOPS | Verdict |
| --- | --- | --- |
| HDD | 200–500 | ❌ Too slow |
| SATA SSD | 5k–10k | ⚠️ Weak |
| Consumer NVMe | 50k–100k | ✅ Good |
| Datacenter NVMe | 100k–250k+ | 💎 Excellent |
If both read and write ≥ 20,000 IOPS → ✅ your disk is ready for X1 mainnet validation.
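If you only want a quick spot check before running the full script below, a single fio invocation like the following reports the same headline IOPS figure (standard fio options; it lays out a temporary 1 GiB test file in the current directory, so run it on the disk that will hold the ledger):

```bash
# 4K random-read spot check: look for the "read: IOPS=" line; >= 20k passes.
fio --name=x1-quick --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=64 --numjobs=1 --size=1G --time_based --runtime=30 --group_reporting
```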
read: IOPS=128k, BW=499MiB/s
write: IOPS=113k, BW=442MiB/s
The Solana CLI provides all the tools required to build and deploy Solana programs.
Install the Solana CLI tool suite using the official install command:
You can replace stable with the release tag matching the software version of your desired release (e.g. v2.0.3), or use one of the three symbolic channel names: stable, beta, or edge.
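For example, pinning the installer to a specific release tag rather than the stable channel would look like this (v2.0.3 is just the example tag mentioned above):

```bash
sh -c "$(curl -sSfL https://release.anza.xyz/v2.0.3/install)"
```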
If it is your first time installing the Solana CLI, you may see the following message prompting you to add a PATH environment variable:
If you are using a Linux or WSL terminal, you can add the PATH environment variable to your shell configuration file by running the command logged from the installation or by restarting your terminal.
To verify that the installation was successful, check the Solana CLI version:
To later update the Solana CLI to the latest version, you can use the following command:
Use the solana-keygen command to generate a new wallet. It will generate a 12-word seed phrase (also known as a mnemonic or recovery phrase). Store it somewhere safe.
Use the latest stable rust version:
Check version:
On Linux systems you may need to install libssl-dev, pkg-config, zlib1g-dev, protobuf etc.
Rust installation link for reference:
Install avm using Cargo. Note this will replace your anchor binary if you had one installed:
Install the latest version of the CLI using avm, and then set it to be the version to use:
Verify installation:
Anchor installation link for reference:
programs: <project name> = "program ID"
cluster = localnet
wallet = path to wallet
scripts: how yarn is used to run a test
Change localnet to: https://rpc.testnet.x1.xyz
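One way to make that change without opening an editor is shown below; the sed pattern assumes the default Anchor.toml layout where the cluster entry sits on its own line, so editing the file by hand works just as well:

```bash
# Point the Anchor provider at X1 testnet instead of localnet.
sed -i 's|^cluster = .*|cluster = "https://rpc.testnet.x1.xyz"|' Anchor.toml
grep '^cluster' Anchor.toml   # confirm the change
```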
To verify set network, use:
Verify received airdrop:
Or:
Go back to the original <project name> directory before building and deploying the program again.
The program's byte size is pre-set at initial deployment. You can increase the byte size by running:
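The corresponding command appears in the reference block at the end of this page; for example, extending a deployed program by 15,000 bytes looks like this:

```bash
solana program extend <program id> 15000
```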
This script runs a full system health report for your X1 validator: it checks CPU, Memory, Disk I/O, Network latency, Clock synchronization, and Vote Account activity — all in one run.
| Check | What it verifies |
| --- | --- |
| CPU / Load | Ensures node isn’t CPU-bound |
| Memory / Swap | Detects paging or low RAM |
| Disk I/O | Measures NVMe read/write latency |
| Network | Measures ping to rpc.mainnet.x1.xyz |
| NTP / Clock | Verifies system time sync |
| Vote Account | Confirms validator is actively voting |
Copy the full script below into a file named x1_health.sh:
Make it executable:
Run:
Run it as your validator user (not root) for accurate environment checks.
Requires: bc, sysstat, and solana CLI in $PATH.
Install with:
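On Ubuntu/Debian the dependencies can be installed with the command from the reference block below:

```bash
sudo apt install -y bc sysstat
```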
curl https://sh.rustup.rs -sSf | sh
source $HOME/.cargo/env
rustup component add rustfmt
rustup update
sudo apt-get update
sudo apt-get install -y git libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang cmake make libprotobuf-dev protobuf-compiler
git clone https://github.com/x1-labs/tachyon.git
cd tachyon
cargo build --release
export PATH=$PATH:$(pwd)/target/release
echo "export PATH=$PATH:$(pwd)/target/release" >> ~/.bashrc
solana --version
tachyon-validator --version
solana-cli 2.0.21 (src:00000000; feat:607245837, client:Tachyon)
tachyon-validator 2.0.21 (src:00000000; feat:2908148756, client:Tachyon)
sudo bash -c "cat >/etc/sysctl.d/21-tachyon-validator.conf <<EOF
# Increase UDP buffer sizes
net.core.rmem_default = 134217728
net.core.rmem_max = 134217728
net.core.wmem_default = 134217728
net.core.wmem_max = 134217728
# Increase memory mapped files limit
vm.max_map_count = 1000000
# Increase number of allowed open file descriptors
fs.nr_open = 1000000
EOF"sudo sysctl -p /etc/sysctl.d/21-tachyon-validator.confLimitNOFILE=1000000DefaultLimitNOFILE=1000000nano /etc/systemd/system.confsudo systemctl daemon-reloadsudo bash -c "cat >/etc/security/limits.d/90-tachyon-nofiles.conf <<EOF
# Increase process file descriptor count limit
* - nofile 1000000
EOF"solana config set -u https://rpc.mainnet.x1.xyzsolana config getsolana-keygen new --no-passphrase -o ~/.config/solana/id.json
solana-keygen new --no-passphrase -o ~/.config/solana/identity.json
solana-keygen new --no-passphrase -o ~/.config/solana/vote.json
solana-keygen new --no-passphrase -o ~/.config/solana/stake.json
solana config set -k id.json
solana balance
mkdir -p $HOME/bin
touch $HOME/bin/validator.sh
nano $HOME/bin/validator.sh
#!/bin/bash
export RUST_LOG=solana_metrics=warn,info
exec tachyon-validator \
--identity $HOME/.config/solana/identity.json \
--vote-account $HOME/.config/solana/vote.json \
--entrypoint entrypoint0.mainnet.x1.xyz:8001 \
--entrypoint entrypoint1.mainnet.x1.xyz:8001 \
--entrypoint entrypoint2.mainnet.x1.xyz:8001 \
--entrypoint entrypoint3.mainnet.x1.xyz:8001 \
--entrypoint entrypoint4.mainnet.x1.xyz:8001 \
--known-validator 7ufaUVtQKzGu5tpFtii9Cg8kR4jcpjQSXwsF3oVPSMZA \
--known-validator 5Rzytnub9yGTFHqSmauFLsAbdXFbehMwPBLiuEgKajUN \
--known-validator 4V2QkkWce8bwTzvvwPiNRNQ4W433ZsGQi9aWU12Q8uBF \
--known-validator CkMwg4TM6jaSC5rJALQjvLc51XFY5pJ1H9f1Tmu5Qdxs \
--known-validator 7J5wJaH55ZYjCCmCMt7Gb3QL6FGFmjz5U8b6NcbzfoTy \
--only-known-rpc \
--log $HOME/validator.log \
--ledger $HOME/ledger \
--rpc-port 8899 \
--full-rpc-api \
--dynamic-port-range 8000-8020 \
--wal-recovery-mode skip_any_corrupted_record \
--limit-ledger-size 50000000 \
--enable-rpc-transaction-history \
--enable-extended-tx-metadata-storage \
--rpc-pubsub-enable-block-subscription \
--full-snapshot-interval-slots 5000 \
--maximum-incremental-snapshots-to-retain 10 \
--maximum-full-snapshots-to-retain 50
chmod +x $HOME/bin/validator.sh
cd $HOME
nohup $HOME/bin/validator.sh &
tail -f $HOME/validator.log
solana catchup --our-localhost
tachyon-validator --ledger ./ledger monitor
solana gossip
ps aux | grep validator
tachyon-validator exit -f
# === X1 Validator Disk IOPS Quick Test (safe, file-based) ===
# Target: >= 20,000 IOPS for both 4K random reads and writes
set -euo pipefail
export LC_ALL=C
MNT="${MNT:-/var/tmp}"
SIZE_MB="${SIZE_MB:-2048}"
RUNTIME="${RUNTIME:-30}"
BS="${BS:-4k}"
JOBS="${JOBS:-4}"
IODEPTH="${IODEPTH:-32}"
ENGINE_CANDIDATES=("io_uring" "libaio")
PASS_THRESH="${PASS_THRESH:-20000}"
# Install fio automatically if missing
if ! command -v fio >/dev/null 2>&1; then
echo "Installing fio..."
sudo apt-get update -y >/dev/null 2>&1
sudo apt-get install -y fio >/dev/null 2>&1
fi
ENGINE=""
for e in "${ENGINE_CANDIDATES[@]}"; do fio --enghelp 2>/dev/null | grep -q "$e" && ENGINE="$e" && break; done
[ -n "$ENGINE" ] || { echo "ERROR: Neither io_uring nor libaio available in fio."; exit 1; }
[ -d "$MNT" ] || { echo "ERROR: Mountpoint $MNT does not exist."; exit 1; }
[ -w "$MNT" ] || { echo "ERROR: $MNT is not writable."; exit 1; }
TESTDIR="$MNT/x1-fio-test.$$"
mkdir -p "$TESTDIR"
cd "$TESTDIR"
# --- Robust fio IOPS parser (handles commas, k/M suffixes, different formats) ---
run_fio_and_get_iops () {
local NAME="$1" RW="$2"
fio --name="$NAME" \
--rw="$RW" \
--bs="$BS" \
--ioengine="$ENGINE" \
--direct=1 \
--group_reporting=1 \
--time_based=1 \
--runtime="$RUNTIME" \
--numjobs="$JOBS" \
--iodepth="$IODEPTH" \
--size="${SIZE_MB}m" \
--filename="$TESTDIR/fiofile.bin" \
--random_generator=tausworthe64 \
--fsync_on_close=1 \
--output="$TESTDIR/${NAME}.out" >/dev/null
# Try the headline "read/write: IOPS=..." line first, then fallback to the "iops : avg=..." line.
awk '
BEGIN{ OFMT="%.0f"; want=(tolower("'"$RW"'")=="randread" ? "read" : "write"); val=""; }
{
line=tolower($0)
# Primary match: "read: IOPS=128k" or "write: IOPS=128,532"
if (line ~ "^"want": " && line ~ /(iops|IOPS)=/) {
if (match($0, /(IOPS|iops)=([0-9][0-9,\.]*)([kKmM]?)/, m)) {
gsub(/,/, "", m[2]); # remove thousands commas
num=m[2]+0
suf=tolower(m[3])
if (suf=="k") num=num*1000
else if (suf=="m") num=num*1000000
printf("%.0f\n", num); exit
}
}
# Fallback: summary line "iops : avg=115058.64"
if (line ~ /iops[[:space:]]*:[[:space:]]*.*avg=/) {
if (match($0, /avg=([0-9][0-9,\.]*)/, m2)) {
gsub(/,/, "", m2[1])
num=m2[1]+0
printf("%.0f\n", num); exit
}
}
}
END{
if (NR>0) {} # no-op
}
' "$TESTDIR/${NAME}.out"
}
echo "Running 4K random READ test for ${RUNTIME}s..."
READ_IOPS="$(run_fio_and_get_iops "x1-randread" "randread" || true)"
READ_IOPS="${READ_IOPS:-0}"
echo "READ IOPS: $READ_IOPS"
echo "Running 4K random WRITE test for ${RUNTIME}s..."
WRITE_IOPS="$(run_fio_and_get_iops "x1-randwrite" "randwrite" || true)"
WRITE_IOPS="${WRITE_IOPS:-0}"
echo "WRITE IOPS: $WRITE_IOPS"
# Compare safely even if value is empty
passfail () {
local val="${1:-0}" kind="$2"
if [ "${val:-0}" -ge "${PASS_THRESH:-20000}" ]; then
echo "PASS ($kind ≥ ${PASS_THRESH})"
else
echo "FAIL ($kind < ${PASS_THRESH})"
fi
}
READ_VERDICT=$(passfail "$READ_IOPS" "read")
WRITE_VERDICT=$(passfail "$WRITE_IOPS" "write")
echo "==========================================="
echo "X1 Validator Disk IOPS Results (4K random)"
echo " Read IOPS : $READ_IOPS => $READ_VERDICT"
echo " Write IOPS: $WRITE_IOPS => $WRITE_VERDICT"
echo " Threshold : ${PASS_THRESH} IOPS"
echo " Params : jobs=$JOBS iodepth=$IODEPTH bs=$BS engine=$ENGINE size=${SIZE_MB}MB runtime=${RUNTIME}s"
echo "Logs in : $TESTDIR"
echo "==========================================="
echo ""
echo "=== fio raw output summary ==="
grep -i "iops" "$TESTDIR"/x1-randread.out || true
grep -i "iops" "$TESTDIR"/x1-randwrite.out || true
echo "=============================="
echo "Done."curl --proto '=https' --tlsv1.2 -sSfL https://raw.githubusercontent.com/solana-developers/solana-install/main/install.sh | bashInstalled Versions:
Rust: rustc 1.84.1 (e71f9a9a9 2025-01-27)
Solana CLI: solana-cli 2.0.26 (src:3dccb3e7; feat:607245837, client:Agave)
Anchor CLI: anchor-cli 0.30.1
Node.js: v23.7.0
Yarn: 1.22.1
sudo apt-get update && sudo apt-get upgrade && sudo apt-get install -y pkg-config build-essential libudev-dev libssl-dev npm
sudo npm install -g yarn
sh -c "$(curl -sSfL https://release.anza.xyz/stable/install)"
Close and reopen your terminal to apply the PATH changes or run the following in your existing shell:
export PATH="/Users/test/.local/share/solana/install/active_release/bin:$PATH"
export PATH="$HOME/.local/share/solana/install/active_release/bin:$PATH"
solana-cli 2.0.26 (src:3dccb3e7; feat:607245837, client:Agave)
agave-install update
solana-keygen new --no-passphrase -o ~/.config/solana/id.json
solana config set -k ~/.config/solana/id.json
curl https://sh.rustup.rs -sSf | sh
source $HOME/.cargo/env
rustup update
cargo -V
sudo apt-get update
sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang cmake make libprotobuf-dev protobuf-compiler
cargo install --git https://github.com/coral-xyz/anchor avm --locked --force
avm install latest
avm use latest
anchor --version
anchor init <project name>
cd <project name>
cat Anchor.toml
nano Anchor.toml
solana config set -u https://rpc.testnet.x1.xyz
solana config get
solana balance
anchor build
anchor test
solana confirm -v <tx hash>
cd programs/<project name>/src
nano lib.rs
solana program extend <program id> 15000
nano x1_health.sh
chmod +x x1_health.sh
./x1_health.sh
#!/bin/bash
# === X1 Validator Health Report ===
# Checks CPU, Memory, Disk I/O, Network, NTP, and Vote Activity
set -euo pipefail
echo "=============================================="
echo "🧠 X1 Validator Health Report"
echo "=============================================="
echo ""
# --- CPU & Load ---
echo "🔹 CPU & Thread Load"
echo " → Measures overall CPU load and idle capacity."
CPU_MODEL=$(lscpu | grep "Model name" | awk -F: '{print $2}' | sed 's/^[ \t]*//')
LOAD=$(uptime | awk -F'load average:' '{print $2}' | xargs)
CPU_LINE=$(top -bn1 | grep "%Cpu" | head -1)
CPU_IDLE=$(echo "$CPU_LINE" | awk '{for(i=1;i<=NF;i++){if($i=="id,"){print $(i-1)}}}' | sed 's/,//')
CPU_IDLE=${CPU_IDLE:-$(echo "$CPU_LINE" | awk '{print $8}' | sed 's/,//')}
echo "CPU Model: $CPU_MODEL"
echo "Load Average: $LOAD"
echo "CPU Idle: ${CPU_IDLE:-unknown}%"
echo ""
# --- Memory ---
echo "🔹 Memory"
echo " → Shows total, used, and cached memory (checks swap usage)."
free -h | awk 'NR==1 || NR==2 {print}'
SWAP_USED=$(free -m | awk '/Swap/ {print $3}')
echo "Swap used: ${SWAP_USED} MB"
echo ""
# --- Network ---
echo "🔹 Network latency to X1 RPC"
echo " → Tests latency and packet loss to rpc.mainnet.x1.xyz."
PING_OUT=$(ping -c5 rpc.mainnet.x1.xyz)
PING_AVG=$(echo "$PING_OUT" | grep "rtt" | awk -F'/' '{print $5}')
echo "$PING_OUT" | grep "packets transmitted"
echo "Average ping: ${PING_AVG} ms"
echo ""
# --- Disk I/O (accurate NVMe parser) ---
echo "🔹 Disk I/O (5s sample)"
echo " → Monitors live NVMe latency and utilization during validator operation."
IOSTAT_OUT=$(iostat -xm 1 5 | awk '/nvme/ {line=$0} END{print line}')
if [ -n "$IOSTAT_OUT" ]; then
echo "$IOSTAT_OUT"
R_LAT=$(echo "$IOSTAT_OUT" | awk '{print $10}')
W_LAT=$(echo "$IOSTAT_OUT" | awk '{print $14}')
UTIL=$(echo "$IOSTAT_OUT" | awk '{print $NF}')
else
R_LAT="N/A"
W_LAT="N/A"
UTIL="N/A"
echo "(No NVMe device detected — skipping latency check)"
fi
echo ""
echo "Avg Read Latency: ${R_LAT:-N/A} ms"
echo "Avg Write Latency: ${W_LAT:-N/A} ms"
echo "Disk Utilization: ${UTIL:-N/A} %"
echo ""
# --- NTP / Clock ---
echo "🔹 NTP / Clock sync"
echo " → Verifies system clock and NTP synchronization status."
timedatectl status | grep -E "System clock|NTP service|synchronized|Time zone"
echo ""
# --- Vote Account Status ---
echo "🔹 Vote Account Activity"
echo " → Confirms validator vote-account is active and submitting votes."
DEFAULT_VOTE_PATH="$HOME/.config/solana/vote-account.json"
if [ -f "$DEFAULT_VOTE_PATH" ]; then
VOTE_ADDR=$(solana address -k "$DEFAULT_VOTE_PATH" 2>/dev/null || true)
else
VOTE_FILE=$(find "$HOME/.config/solana" -maxdepth 1 -type f -name "*vote*.json" | head -n1 || true)
if [ -n "$VOTE_FILE" ]; then
VOTE_ADDR=$(solana address -k "$VOTE_FILE" 2>/dev/null || true)
else
VOTE_ADDR=""
fi
fi
if [ -n "$VOTE_ADDR" ]; then
echo "Detected vote account: $VOTE_ADDR"
VOTE_INFO=$(solana vote-account "$VOTE_ADDR" 2>/dev/null || true)
ROOT_SLOT=$(echo "$VOTE_INFO" | awk '/Root Slot:/ {print $3; exit}')
LAST_VOTE=$(echo "$VOTE_INFO" | awk '/^- slot:/ {print $3; exit}')
EPOCH_NUM=$(echo "$VOTE_INFO" | awk '/^- epoch:/ {print $3; exit}')
echo "Root Slot: ${ROOT_SLOT:-N/A}"
echo "Last Vote Slot: ${LAST_VOTE:-N/A}"
echo "Epoch: ${EPOCH_NUM:-N/A}"
if [[ "$LAST_VOTE" =~ ^[0-9]+$ ]]; then
echo "Vote activity: ✅ Active (recent votes found)"
VOTE_STATUS="✅ Voting active"
else
echo "Vote activity: ⚠️ No recent votes found"
VOTE_STATUS="⚠️ Voting inactive"
fi
else
echo "⚠️ No vote-account.json found in ~/.config/solana/"
VOTE_STATUS="⚠️ No vote account detected"
fi
echo ""
# --- Summary ---
echo "=============================================="
echo "✅ SUMMARY REPORT"
echo "=============================================="
if (( $(echo "$CPU_IDLE < 30" | bc -l) )); then CPU_STATUS="⚠️ High CPU usage"; else CPU_STATUS="✅ CPU OK"; fi
if (( SWAP_USED > 200 )); then MEM_STATUS="⚠️ Swap in use"; else MEM_STATUS="✅ Memory OK"; fi
if (( $(echo "$PING_AVG > 50" | bc -l) )); then NET_STATUS="⚠️ High latency"; else NET_STATUS="✅ Network OK"; fi
if [[ "$R_LAT" =~ ^[0-9.]+$ ]] && (( $(echo "$R_LAT > 2" | bc -l) )) || [[ "$W_LAT" =~ ^[0-9.]+$ ]] && (( $(echo "$W_LAT > 2" | bc -l) )); then
DISK_STATUS="⚠️ Slow disk I/O"
else
DISK_STATUS="✅ Disk OK"
fi
if timedatectl status | grep -q "synchronized: yes"; then CLOCK_STATUS="✅ Clock synced"; else CLOCK_STATUS="⚠️ NTP not synced"; fi
printf "%-15s %s\n" "CPU:" "$CPU_STATUS"
printf "%-15s %s\n" "Memory:" "$MEM_STATUS"
printf "%-15s %s\n" "Network:" "$NET_STATUS"
printf "%-15s %s\n" "Disk I/O:" "$DISK_STATUS"
printf "%-15s %s\n" "Clock:" "$CLOCK_STATUS"
printf "%-15s %s\n" "Vote:" "$VOTE_STATUS"
echo ""
echo "Done."
echo "=============================================="🧠 X1 Validator Health Report
CPU Model: AMD Ryzen 9 9900X 12-Core Processor
Load Average: 5.62, 5.58, 5.47
CPU Idle: 78.3%
Avg Read Latency: 0.15 ms
Avg Write Latency: 0.23 ms
Disk Utilization: 8.3 %
🔹 Vote Account Activity
Detected vote account: 7oTGUhJt72GgGczT5KzQsqEcnuiHz8Wd9Wo5ZsKmR4hX
Root Slot: 6163567
Last Vote Slot: 6163611
Epoch: 40
Vote activity: ✅ Active (recent votes found)
==============================================
✅ SUMMARY REPORT
==============================================
CPU: ✅ CPU OK
Memory: ✅ Memory OK
Network: ✅ Network OK
Disk I/O: ✅ Disk OK
Clock: ✅ Clock synced
Vote: ✅ Voting active
Done.
==============================================
sudo apt install -y bc sysstat
Test your validator's bandwidth: download/upload speeds, live throughput, and RPC connectivity.
| Check | What it measures |
| --- | --- |
| Download Speed | Measures max download capacity (Mbps) |
| Upload Speed | Measures max upload capacity (Mbps) |
| Interface Stats | Shows current throughput on network adapter |
| RPC Connectivity | Tests latency to rpc.mainnet.x1.xyz |
| Recommendations | Analyzes if bandwidth meets validator needs |
Based on production validators (Nov 2025):
Average usage: 150-200 Mbps sustained
Peak usage: 300-700 Mbps during high activity
Growth rate: Doubling every 1-2 months
Network trend: Approaching 1 Gbps per validator
Bottom line: 1 Gbps minimum today, plan for 10 Gbps within 6-12 months.
Your ISP might advertise "1 Gbps Unmetered" but actually provide:
Run this bandwidth test — Shows maximum capacity
Check your hosting provider's graphs — Shows actual usage vs thresholds
Example: InterServer "1 Gbps Unmetered" was actually 200 Mbps committed with red thresholds at ~180-200 Mbps.
"What is my committed information rate (CIR)?"
"Is this sustained bandwidth or burst?"
"What happens if my 95th percentile exceeds X Mbps?"
"Can I sustain 500-1000 Mbps continuously?"
Install dependencies:
Copy the full script below into a file named x1_bandwidth.sh:
Make it executable:
Run:
Action: You're good. Monitor trends monthly.
Action: Plan upgrade to 1 Gbps committed soon.
Action: Upgrade immediately. Asymmetric connection will cause issues.
Add to cron for weekly reports:
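For example, the crontab entry from the reference output below runs the test weekly and appends the results to a log (adjust the script path and log location to your setup):

```bash
# Run bandwidth test every Sunday at 3 AM
0 3 * * 0 /home/validator/x1_bandwidth.sh >> /var/log/x1_bandwidth.log 2>&1
```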
Don't just rely on speedtest — check your hosting provider's actual usage graphs:
Look for 95th percentile metrics
Check for red warning thresholds
Compare current usage vs thresholds
Monitor growth trends month-over-month
Q: I have "1 Gbps" but speedtest shows 800 Mbps. Why?
A: TCP/IP overhead, network congestion, or provider throttling. Also verify if it's committed vs burst.
Q: My upload is lower than download. Is that OK?
A: No. Validators need symmetric bandwidth. Upgrade to fiber with equal upload/download.
Q: I'm at 60% utilization. Should I upgrade?
A: Yes. X1 network usage is growing. Upgrade before you hit 80-90%.
Q: What's the difference between committed and burst?
A: Committed = sustained 24/7. Burst = temporary spikes allowed. Validators need committed.
Q: Can I run a validator on cable internet?
A: No. Cable is asymmetric (low upload). You need symmetric fiber.
Run during normal validator operation to see real-world throughput alongside capacity tests.
Speedtest results may vary — run multiple times for accurate average.
For continuous monitoring, consider setting up a cron job to log results.
Requires: speedtest-cli, bc, curl, ethtool
| Requirement | Minimum | Recommended | Notes |
| --- | --- | --- | --- |
| Speed | 1 Gbps symmetric | 10 Gbps symmetric | Upload must equal download |
| Type | Committed/sustained | Committed/sustained | NOT burst capacity |
| Data Cap | Unmetered | Unmetered | No monthly limits |
| Latency | <20ms | <5ms | To RPC endpoints |
Port Speed: 1 Gbps (can burst to this temporarily)
Committed Rate: 200-500 Mbps (what you can sustain 24/7)
Billing: 95th percentile on committed rate
sudo apt update
sudo apt install -y speedtest-cli bc curl ethtool
nano x1_bandwidth.sh
chmod +x x1_bandwidth.sh
./x1_bandwidth.sh
#!/bin/bash
# === X1 Validator Bandwidth Test ===
# Tests download/upload speed, interface throughput, and RPC connectivity
# Author: X1 Labs / Xen Tzu
set -euo pipefail
echo "=============================================="
echo "🌐 X1 Validator Bandwidth Test"
echo "=============================================="
echo ""
# --- Check for speedtest-cli ---
if ! command -v speedtest-cli &> /dev/null; then
echo "⚠️ speedtest-cli not found. Installing..."
sudo apt update && sudo apt install -y speedtest-cli
echo ""
fi
# --- Detect primary network interface ---
echo "🔹 Network Interface"
IFACE=$(ip route | grep default | awk '{print $5}' | head -1)
if [ -z "$IFACE" ]; then
echo "⚠️ Could not detect network interface"
IFACE="eth0"
fi
echo " Interface: $IFACE"
# Get interface speed capability
LINK_SPEED=$(ethtool "$IFACE" 2>/dev/null | grep "Speed:" | awk '{print $2}' || echo "Unknown")
echo " Link Speed: $LINK_SPEED"
echo ""
# --- Current Interface Throughput (5s sample) ---
echo "🔹 Current Throughput (5-second sample)"
echo " Measuring live traffic on $IFACE..."
# Get initial bytes
RX1=$(cat /sys/class/net/"$IFACE"/statistics/rx_bytes)
TX1=$(cat /sys/class/net/"$IFACE"/statistics/tx_bytes)
sleep 5
# Get final bytes
RX2=$(cat /sys/class/net/"$IFACE"/statistics/rx_bytes)
TX2=$(cat /sys/class/net/"$IFACE"/statistics/tx_bytes)
# Calculate Mbps
RX_MBPS=$(awk "BEGIN {printf \"%.2f\", ($RX2 - $RX1) * 8 / 5 / 1000000}")
TX_MBPS=$(awk "BEGIN {printf \"%.2f\", ($TX2 - $TX1) * 8 / 5 / 1000000}")
echo " Current Download: ${RX_MBPS} Mbps"
echo " Current Upload: ${TX_MBPS} Mbps"
echo ""
# --- Speedtest (Maximum Capacity) ---
echo "🔹 Speedtest (Maximum Capacity)"
echo " Testing connection speed (this takes ~30 seconds)..."
SPEEDTEST_OUTPUT=$(speedtest-cli --simple 2>&1)
PING=$(echo "$SPEEDTEST_OUTPUT" | grep "Ping:" | awk '{print $2}')
DOWNLOAD=$(echo "$SPEEDTEST_OUTPUT" | grep "Download:" | awk '{print $2}')
UPLOAD=$(echo "$SPEEDTEST_OUTPUT" | grep "Upload:" | awk '{print $2}')
echo " Ping: ${PING} ms"
echo " Download: ${DOWNLOAD} Mbps"
echo " Upload: ${UPLOAD} Mbps"
echo ""
# --- RPC Connectivity ---
echo "🔹 RPC Connectivity"
echo " Testing connection to rpc.mainnet.x1.xyz..."
RPC_START=$(date +%s%N)
RPC_STATUS=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 https://rpc.mainnet.x1.xyz || echo "000")
RPC_END=$(date +%s%N)
RPC_LATENCY=$(awk "BEGIN {printf \"%.3f\", ($RPC_END - $RPC_START) / 1000000}")
if [ "$RPC_STATUS" = "200" ] || [ "$RPC_STATUS" = "405" ]; then
echo " RPC Latency: ${RPC_LATENCY} ms"
echo " RPC Status: ✅ Connected"
RPC_OK=true
else
echo " RPC Status: ❌ Failed (HTTP $RPC_STATUS)"
RPC_OK=false
fi
echo ""
# --- Analysis ---
echo "=============================================="
echo "📊 BANDWIDTH ANALYSIS"
echo "=============================================="
# Convert speeds to numbers for comparison
DL_NUM=$(echo "$DOWNLOAD" | cut -d. -f1)
UP_NUM=$(echo "$UPLOAD" | cut -d. -f1)
RPC_LAT_NUM=$(echo "$RPC_LATENCY" | cut -d. -f1)
# Analyze download
if [ "$DL_NUM" -ge 1000 ]; then
DL_STATUS="✅ Excellent (1Gbps+)"
elif [ "$DL_NUM" -ge 500 ]; then
DL_STATUS="⚠️ Acceptable (500Mbps+)"
else
DL_STATUS="❌ Insufficient (<500Mbps)"
fi
# Analyze upload
if [ "$UP_NUM" -ge 1000 ]; then
UP_STATUS="✅ Excellent (1Gbps+)"
elif [ "$UP_NUM" -ge 500 ]; then
UP_STATUS="⚠️ Acceptable (500Mbps+)"
else
UP_STATUS="❌ Insufficient (<500Mbps)"
fi
# Analyze RPC latency
if [ "$RPC_LAT_NUM" -lt 20 ]; then
LAT_STATUS="✅ Excellent (<20ms)"
elif [ "$RPC_LAT_NUM" -lt 50 ]; then
LAT_STATUS="⚠️ Acceptable (<50ms)"
else
LAT_STATUS="❌ High (>50ms)"
fi
echo "Download Capacity: $DL_STATUS"
echo "Upload Capacity: $UP_STATUS"
echo "RPC Latency: $LAT_STATUS"
if $RPC_OK; then
echo "RPC Connection: ✅ OK"
else
echo "RPC Connection: ❌ Failed"
fi
echo ""
# --- Recommendations ---
echo "=============================================="
echo "📋 RECOMMENDATIONS"
echo "=============================================="
# Check if symmetric
SYMMETRIC=false
DIFF=$((DL_NUM - UP_NUM))
if [ ${DIFF#-} -lt 100 ]; then
SYMMETRIC=true
fi
if [ "$DL_NUM" -ge 1000 ] && [ "$UP_NUM" -ge 1000 ]; then
echo "✅ Your bandwidth meets Solana/X1 minimum requirements."
echo ""
echo " Current capacity: ${DOWNLOAD}/${UPLOAD} Mbps"
echo " Minimum required: 1000/1000 Mbps (symmetric)"
echo ""
if [ "$DL_NUM" -lt 10000 ]; then
echo "💡 Consider upgrading to 10 Gbps for future growth."
echo " X1 network usage is increasing and may approach"
echo " 1 Gbps per validator within 6-12 months."
fi
elif [ "$DL_NUM" -ge 500 ] && [ "$UP_NUM" -ge 500 ]; then
echo "⚠️ Your bandwidth is below recommended levels."
echo ""
echo " Current capacity: ${DOWNLOAD}/${UPLOAD} Mbps"
echo " Minimum required: 1000/1000 Mbps (symmetric)"
echo ""
echo "📌 UPGRADE RECOMMENDED:"
echo " • Contact your ISP for 1 Gbps symmetric upgrade"
echo " • Ask specifically for COMMITTED rate, not burst"
echo " • Verify it's truly symmetric (upload = download)"
else
echo "❌ Your bandwidth is insufficient for validator operations."
echo ""
echo " Current capacity: ${DOWNLOAD}/${UPLOAD} Mbps"
echo " Minimum required: 1000/1000 Mbps (symmetric)"
echo ""
echo "🚨 URGENT UPGRADE NEEDED:"
echo " • Current bandwidth may cause missed votes"
echo " • Upgrade to at least 1 Gbps symmetric immediately"
echo " • Consider 10 Gbps for optimal performance"
fi
# Check symmetry - only warn if upload is below 1 Gbps
if ! $SYMMETRIC && [ "$UP_NUM" -lt 1000 ]; then
echo ""
echo "⚠️ ASYMMETRIC CONNECTION DETECTED"
echo " Download: ${DOWNLOAD} Mbps"
echo " Upload: ${UPLOAD} Mbps"
echo ""
echo " Validators require SYMMETRIC bandwidth (equal up/down)."
echo " Contact your ISP for symmetric fiber connection."
fi
# Current usage vs capacity
echo ""
echo "=============================================="
echo "📈 USAGE vs CAPACITY"
echo "=============================================="
echo "Current Usage: ${RX_MBPS} Mbps down / ${TX_MBPS} Mbps up"
echo "Max Capacity: ${DOWNLOAD} Mbps down / ${UPLOAD} Mbps up"
USAGE_PERCENT=$(awk "BEGIN {printf \"%.1f\", ($RX_MBPS / $DOWNLOAD) * 100}")
echo "Utilization: ${USAGE_PERCENT}%"
echo ""
if (( $(echo "$USAGE_PERCENT > 70" | bc -l) )); then
echo "⚠️ HIGH UTILIZATION (>70%)"
echo " You're using most of your available bandwidth."
echo " Consider upgrading before you hit capacity limits."
elif (( $(echo "$USAGE_PERCENT > 50" | bc -l) )); then
echo "💡 MODERATE UTILIZATION (50-70%)"
echo " Monitor your usage trends. Plan upgrade if growing."
else
echo "✅ GOOD HEADROOM (<50%)"
echo " Plenty of capacity available for growth."
fi
# Final critical check
echo ""
echo "=============================================="
echo "⚠️ CRITICAL: Verify With Your Provider"
echo "=============================================="
echo ""
echo "This test shows your PORT SPEED (burst capacity)."
echo "Your ISP may have a lower COMMITTED RATE."
echo ""
echo "🔍 Next Steps:"
echo " 1. Check your hosting provider's bandwidth graphs"
echo " 2. Look for 'red threshold lines' around 200-500 Mbps"
echo " 3. Ask your ISP: 'What is my committed information rate?'"
echo " 4. Verify sustained usage vs provider thresholds"
echo ""
echo "Example: '1 Gbps Unmetered' might mean:"
echo " • Port: 1000 Mbps (burst)"
echo " • Committed: 200 Mbps (sustained)"
echo " • Billing: 95th percentile"
echo ""
echo "=============================================="
echo "Test Complete"
echo "=============================================="==============================================
🌐 X1 Validator Bandwidth Test
==============================================
🔹 Network Interface
Interface: enp4s0
Link Speed: 1000Mb/s
🔹 Current Throughput (5-second sample)
Current Download: 189.45 Mbps
Current Upload: 142.67 Mbps
🔹 Speedtest (Maximum Capacity)
Ping: 2.26 ms
Download: 814.00 Mbps
Upload: 766.28 Mbps
🔹 RPC Connectivity
RPC Latency: 12.345 ms
RPC Status: ✅ Connected
==============================================
📊 BANDWIDTH ANALYSIS
==============================================
Download Capacity: ⚠️ Acceptable (500Mbps+)
Upload Capacity: ⚠️ Acceptable (500Mbps+)
RPC Latency: ✅ Excellent (<20ms)
RPC Connection: ✅ OK
==============================================
📋 RECOMMENDATIONS
==============================================
⚠️ Your bandwidth is below recommended levels.
Current capacity: 814.00/766.28 Mbps
Minimum required: 1000/1000 Mbps (symmetric)
📌 UPGRADE RECOMMENDED:
• Contact your ISP for 1 Gbps symmetric upgrade
• Ask specifically for COMMITTED rate, not burst
• Verify it's truly symmetric (upload = download)
==============================================
📈 USAGE vs CAPACITY
==============================================
Current Usage: 189.45 Mbps down / 142.67 Mbps up
Max Capacity: 814.00 Mbps down / 766.28 Mbps up
Utilization: 23.3%
✅ GOOD HEADROOM (<50%)
Plenty of capacity available for growth.
==============================================
⚠️ CRITICAL: Verify With Your Provider
==============================================
This test shows your PORT SPEED (burst capacity).
Your ISP may have a lower COMMITTED RATE.
🔍 Next Steps:
1. Check your hosting provider's bandwidth graphs
2. Look for 'red threshold lines' around 200-500 Mbps
3. Ask your ISP: 'What is my committed information rate?'
4. Verify sustained usage vs provider thresholds
Example: '1 Gbps Unmetered' might mean:
• Port: 1000 Mbps (burst)
• Committed: 200 Mbps (sustained)
• Billing: 95th percentile
==============================================
Test Complete
==============================================
Download: 1200 Mbps
Upload: 1150 Mbps
Symmetric: Yes
Utilization: 35%
Download: 850 Mbps
Upload: 780 Mbps
Symmetric: Yes
Utilization: 55%
Download: 950 Mbps
Upload: 180 Mbps
Symmetric: No
Utilization: 85%
# Run bandwidth test every Sunday at 3 AM
0 3 * * 0 /home/validator/x1_bandwidth.sh >> /var/log/x1_bandwidth.log 2>&1
sudo apt install -y speedtest-cli bc curl ethtool