Welcome to Cocoon — the Confidential Compute Open Network
Cocoon is a decentralized network for executing AI inference securely and privately.
In this network, app developers reward GPU owners with TON for processing inference requests.
Telegram will be the first major customer to use Cocoon for confidential AI queries — and will invest heavily in promoting the network across its global ecosystem.
App developers who want to run inference through Cocoon are invited to contact us via DMs to this channel.
Please specify which model architecture you plan to use (e.g., DeepSeek, Qwen), along with your expected daily query volume and average input/output token counts.
GPU owners who want to earn TON by contributing compute power can also message this channel using the button below.
Please indicate how many GPUs you can provide and include details such as type (e.g., H200), VRAM, and expected uptime.
Cocoon is ready — launching in November, once we’ve gathered your applications.