<aside>
💡 Async Highlights:
Pros:
You can use Rust asynchronous tasks to interleave many independent activities on a single thread or a pool of worker threads.
The memory overhead inherent in the use of tasks is much less significant than that of threads.
It is perfectly feasible to have hundreds of thousands of asynchronous tasks running simultaneously in a single program.
Cons:
Operations that might block, like I/O or acquiring mutexes, need to be handled a bit differently; async_std's task::yield_now and task::spawn_blocking help with long-running or blocking work.
</aside>
<aside>
💡 Important
Because await expressions need to be able to suspend and resume the current function, you can only use them inside async functions. You can call an async function from a synchronous function (like main, for example) using async_std's task::block_on, which takes a future and polls it until it produces a value. Don't use block_on inside an async function: it would block the entire thread until the value is ready. Use await instead.
task::spawn_local takes a future and adds it to a pool that block_on will try polling whenever the future it's currently blocking on isn't ready. task::spawn spawns a future onto a pool of worker threads dedicated to polling futures that are ready to make progress.
</aside>
You can use Rust asynchronous tasks to interleave many independent activities on a single thread or a pool of worker threads.
Asynchronous tasks are similar to threads, but are much quicker to create, pass control amongst themselves more efficiently, and have memory overhead an order of magnitude less than that of a thread.
It is perfectly feasible to have hundreds of thousands of asynchronous tasks running simultaneously in a single program. Of course, your application may still be limited by other factors like network bandwidth, database speed, computation, or the work's inherent memory requirements, but the memory overhead inherent in the use of tasks is much less significant than that of threads.
Generally, asynchronous Rust code looks very much like ordinary multithreaded code, except that operations that might block, like I/O or acquiring mutexes, need to be handled a bit differently.
Creating a thread for each incoming connection
```rust
use std::{net, thread};

let listener = net::TcpListener::bind(address)?;

for socket_result in listener.incoming() {
    let socket = socket_result?;
    let groups = chat_group_table.clone();
    thread::spawn(|| {
        log_error(serve(socket, groups));
    });
}
```
For each new connection, this spawns a fresh thread running the serve function, which is able to focus on managing a single connection's needs.
Creating a task for each incoming connection
```rust
use async_std::prelude::*;   // for .next() on the incoming stream
use async_std::{net, task};

let listener = net::TcpListener::bind(address).await?;

let mut new_connections = listener.incoming();
while let Some(socket_result) = new_connections.next().await {
    let socket = socket_result?;
    let groups = chat_group_table.clone();
    task::spawn(async {
        log_error(serve(socket, groups).await);
    });
}
```
This uses the async_std crate's networking and task modules and adds .await after the calls that may block. But the overall structure is the same as the thread-based version.
This is a single-threaded, non-concurrent program that makes an HTTP request and returns the server's response:
```rust
use std::io::prelude::*;
use std::net;

fn cheapo_request(host: &str, port: u16, path: &str)
    -> std::io::Result<String>
{
    let mut socket = net::TcpStream::connect((host, port))?;

    let request = format!("GET {} HTTP/1.1\r\nHost: {}\r\n\r\n", path, host);
    socket.write_all(request.as_bytes())?;
    socket.shutdown(net::Shutdown::Write)?;

    let mut response = String::new();
    socket.read_to_string(&mut response)?;

    Ok(response)
}
```

This diagram shows how the function call stack behaves as time runs from left to right. Each function call is a box, placed atop its caller. Obviously, the cheapo_request function runs throughout the entire execution.
These functions call others in turn, but eventually the program makes system calls: requests to the operating system to actually get something done, like opening a TCP connection, or reading or writing some data.