Using Rust to develop web applications
An exploration of the Rust ecosystem
Who’s this guy?
Sylvain Wallez - @bluxte
Tech lead - Elastic Cloud
Previously tech lead, CTO, architect, trainer, developer…
...at OVH, Actoboard, Sigfox, Scoop.it, Joost, Anyware
Member of the Apache Software Foundation since 2003
- we’re hiring!
On the menu
● Rust for webapps? Why?
● Architecture
● Handling http requests
● Database access
● Logs & metrics
● Docker-ization
● Conclusion
How it all started
#3 on HackerNews
“Wait – how does this work in Rust,
this “other” recent low-level language?”
“Ooooh, Rust is sweet!”
“Can we do webapps with Rust?”
“Yes, we can! And it’s quite nice!”
Rust
“Rust is a systems programming language that runs blazingly fast, prevents
segfaults, and guarantees thread safety” – https://www.rust-lang.org/
● Zero cost abstractions, threads without data races, minimal runtime
● No garbage collector, guaranteed memory safety
● Type inference, traits, pattern matching, type classes, higher-order functions
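As a tiny illustration of that last bullet (not from the talk's code): pattern matching, type inference and higher-order functions combine naturally.
#[derive(Debug)]
enum Status {
    Ok,
    Error(String),
}

fn main() {
    let results = vec![Status::Ok, Status::Error("boom".to_string()), Status::Ok];

    // Closure passed to a higher-order function; element types are inferred.
    let errors: Vec<_> = results
        .iter()
        .filter_map(|s| match s {
            Status::Error(msg) => Some(msg.clone()),
            Status::Ok => None,
        })
        .collect();

    println!("{} error(s): {:?}", errors.len(), errors);
}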
Learning Rust
Also online at https://www.rust-lang.org/
The Rust ecosystem
● crates.io – there’s a crate for that!
● Twitter: @rustlang, @ThisWeekInRust
● https://users.rust-lang.org
● https://exercism.io/
● http://www.arewewebyet.org/
● http://arewegameyet.com/
● https://areweideyet.com/
● http://www.arewelearningyet.com/
The project: my blog’s comment server
It’s in Python, let’s rewrite it in Rust
Small, but covers a lot:
● web api, data validation, CORS
● database access
● markdown, rendering, HTML sanitization
● sending emails
● admin front-end
Code at https://github.com/swallez/risso
Architecture
● Web server: actix-web, tokio, serde
● API: config, tokio-threadpool, futures
● Persistence: diesel, r2d2
● Markup: pulldown-cmark, html5ever, ammonia
● Email: lettre, lettre_email
● Observability: log, slog, prometheus
Project layout
Cargo.toml
[package]
name = "risso_actix"
version = "0.1.0"
description = "Actix-web server front-end to risso_api"
authors = ["Sylvain Wallez <sylvain@bluxte.net>"]
license = "Apache-2.0"
[dependencies]
actix = "0.7"
actix-web = "0.7"
actix-web-requestid = "0.1.2"
serde = "1.0.80"
serde_derive = "1.0.80"
failure = "0.1.3"
lazy_static = "1.1"
maplit = "1.0.1"
intern = "0.2.0"
futures = "0.1"
log = "0.4.6"
env_logger = "0.5.13"
slog = "2.4.1"
slog-term = "2.4.0"
slog-json = "2.2.0"
slog-async = "2.3.0"
slog-scope = "4.0.1"
prometheus = "0.4.2"
risso_api = { path = "../risso_api" }
main
pub fn main() -> Result<(), failure::Error> {
    info!("Starting...");

    let config = risso_api::CONFIG.get::<ActixConfig>("actix")?;
    let listen_addr = config.listen_addr;
    let allowed_origins = config.allowed_origins;

    let api_builder = ApiBuilder::new()?;
    let api = api_builder.build();

    let srv = server::new(move || {
        App::with_state(api.clone())
            .route("/", Method::GET, fetch)
            .route("/new", Method::POST, new_comment)
            .route("/id/{id}", Method::GET, view)
            .route("/id/{id}/unsubscribe/{email}/{key}", Method::GET, unsubscribe)
            .route("/metrics", Method::GET, metrics::handler)
            // ...
            .middleware(build_cors(&allowed_origins))
    });

    srv.bind(listen_addr)?.run();
    Ok(())
}
config
[actix]
listen_addr = "127.0.0.1:8080"
allowed_origins = []
[database]
db_path = "data/comments.db"
min_connections = 1
max_connections = 10
#[derive(Deserialize)]
pub struct ActixConfig {
    listen_addr: String,
    allowed_origins: Vec<String>,
}

#[derive(Deserialize)]
struct ContextConfig {
    db_path: String,
    min_connections: u32,
    max_connections: u32,
}
config.toml → serde → typed config structs
serde: swiss-army knife of data serialization
Macros and traits that generate key/value (de)constructors.
Libraries providing (de)constructors for specific serialization formats
Any data structure ⟷ serde ⟷ JSON, CBOR, YAML, MessagePack, TOML, GOB, Pickle, RON, BSON, Avro, URL x-www-form-urlencoded, XML, env vars, AWS Parameter Store and many more...
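A minimal self-contained sketch of that idea, using serde's derive support and serde_json (the exact crates are an assumption; the deck uses serde_derive): one derive, and the same struct round-trips through any supported format.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct Comment {
    author: String,
    text: String,
}

fn main() -> Result<(), serde_json::Error> {
    let c = Comment { author: "Sylvain".into(), text: "Rust is sweet!".into() };

    let json = serde_json::to_string(&c)?;             // struct -> JSON
    let back: Comment = serde_json::from_str(&json)?;  // JSON -> struct

    println!("{} -> {:?}", json, back);
    Ok(())
}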
config
pub fn load_config() -> Result<Config, ConfigError> {
    let mut s = Config::new();

    // Load defaults
    s.merge(File::from_str(include_str!("defaults.toml"), FileFormat::Toml))?;

    // Find an optional "--config" command-line argument
    let mut args = env::args();
    while let Some(arg) = args.next() {
        if arg == "--config" {
            break;
        }
    }
    if let Some(path) = args.next() {
        s.merge(File::with_name(&path))?;
    }

    // Load an optional local file (useful for development)
    s.merge(File::with_name("local").required(false))?;

    Ok(s)
}
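The CONFIG value used in main() is presumably a lazily initialized global built from load_config(); a sketch with lazy_static (the exact shape in risso_api may differ):
use config::Config;
use lazy_static::lazy_static;

lazy_static! {
    // Parsed once on first access; CONFIG.get::<ActixConfig>("actix") then
    // deserializes one section into a typed struct, as shown in main().
    pub static ref CONFIG: Config = load_config().expect("invalid configuration");
}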
Actix: routing & extraction
App::with_state(api.clone())
    .route("/", Method::GET, fetch)

pub fn fetch(
    log: RequestLogger,
    state: State<ApiContext>,
    req: Query<FetchRequest>,
) -> impl Responder {
    slog_info!(log, "Fetching comments");
    risso_api::fetch(&state, req.into_inner())
        .map(Json)
        .responder()
}

#[derive(Debug, Deserialize)]
pub struct FetchRequest {
    uri: String,
    parent: Option<CommentId>,
    limit: Option<i32>,
    nested_limit: Option<usize>,
    after: Option<DateTime<Utc>>,
    plain: Option<i32>,
}
https://risso.rs/?uri=/blog/great-post
The power of Rust generic traits
pub struct Query<T>(T);

impl<T> Query<T> {
    pub fn into_inner(self) -> T {
        self.0
    }
}

impl<T, S> FromRequest<S> for Query<T> {
    type Result = Result<Self, Error>;

    #[inline]
    fn from_request(req: &HttpRequest<S>) -> Self::Result {
        serde_urlencoded::from_str::<T>(req.query_string())
            .map_err(|e| e.into())
            .map(Query)
    }
}

pub trait FromRequest<S> {
    /// Future that resolves to a Self
    type Result: Into<AsyncResult<Self>>;

    /// Convert request to a Self
    fn from_request(req: &HttpRequest<S>) -> Self::Result;
}

impl<T> Deref for Query<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}
zero cost abstraction
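The same newtype-plus-Deref pattern, boiled down to a standalone sketch (the types here are made up for illustration): the wrapper adds meaning at the type level but compiles away to nothing.
use std::ops::Deref;

struct Wrapper<T>(T);

impl<T> Deref for Wrapper<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}

struct FetchParams {
    uri: String,
}

fn main() {
    let q = Wrapper(FetchParams { uri: "/blog/great-post".into() });
    // Field access auto-derefs through the wrapper: no copy, no runtime cost.
    println!("uri = {}", q.uri);
}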
Path extraction with tuples
App::with_state(api.clone())
    .route("/id/{id}/unsubscribe/{email}/{key}", Method::GET, unsubscribe)

fn unsubscribe(state: State<ApiContext>, path: Path<(String, String, String)>)
    -> impl Responder
{
    let (id, email, key) = path.into_inner();
    risso_api::unsubscribe(&state, id, email, key)
        .map(Json)
        .responder()
}
Diesel: struct - relational mapping
Similar to JOOQ in Java:
● define your schema
● define associated data structures
● write strongly-typed SQL in a Rust DSL
Also handles migrations
Works with the r2d2 connection pool
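A sketch of that pool wiring (not the project's actual code), assuming the ContextConfig shown earlier and diesel's SQLite backend, since the config points at a .db file:
use diesel::r2d2::{ConnectionManager, Pool};
use diesel::sqlite::SqliteConnection;

pub type SqlitePool = Pool<ConnectionManager<SqliteConnection>>;

pub fn build_pool(cfg: &ContextConfig) -> SqlitePool {
    // One manager per database URL; the pool hands out pooled connections.
    let manager = ConnectionManager::<SqliteConnection>::new(cfg.db_path.clone());
    Pool::builder()
        .min_idle(Some(cfg.min_connections))
        .max_size(cfg.max_connections)
        .build(manager)
        .expect("failed to build the r2d2 connection pool")
}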
Diesel: schema
table! {
    comments (id) {
        #[sql_name = "tid"]
        thread_id -> Integer,
        id -> Integer,
        parent -> Nullable<Integer>,
        created -> Double,
        modified -> Nullable<Double>,
        mode -> Integer,
        remote_addr -> Text,
        text -> Text,
        author -> Nullable<Text>,
        email -> Nullable<Text>,
        website -> Nullable<Text>,
        likes -> Integer,
        dislikes -> Integer,
        notification -> Bool,
    }
}

table! {
    threads (id) {
        id -> Integer,
        uri -> Text, // Unique
        title -> Text,
    }
}

joinable!(comments -> threads (thread_id));

#[derive(Queryable)]
pub struct Thread {
    pub id: i32,
    pub uri: String,
    pub title: String,
}
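For completeness, a hypothetical Queryable counterpart for the comments table (the real project may differ), showing how the column types above map to Rust types: Nullable<T> becomes Option<T>, Double becomes f64, Text becomes String.
#[derive(Queryable)]
pub struct Comment {
    pub thread_id: i32,
    pub id: i32,
    pub parent: Option<i32>,
    pub created: f64,
    pub modified: Option<f64>,
    pub mode: i32,
    pub remote_addr: String,
    pub text: String,
    pub author: Option<String>,
    pub email: Option<String>,
    pub website: Option<String>,
    pub likes: i32,
    pub dislikes: i32,
    pub notification: bool,
}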
Diesel: query
// Comment count for main thread and all reply threads for one url.
let stmt = comments::table
    .inner_join(threads::table)
    .select((comments::parent, count_star()))
    .filter(
        threads::uri.eq(uri)
            .and(CommentMode::mask(mode))
            .and(comments::created.gt(after)),
    )
    .group_by(comments::parent);

trace!("{:?}", diesel::debug_query(&stmt));
let result = stmt.load(cnx);

SELECT comments.parent, count(*)
FROM comments INNER JOIN threads ON
    threads.uri = ? AND comments.tid = threads.id AND
    (? | comments.mode = ?) AND
    comments.created > ?
GROUP BY comments.parent
Diesel: use a thread pool!
let future_comments = ctx.spawn_db(move |cnx| {
    comments::table.inner_join(threads::table)
        .select(comments::all_columns)
        .load(cnx)
});

future_comments.map(|comments| {
    // render markdown, etc.
});
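The "render markdown" step hinted at above could look like this sketch, using the pulldown-cmark and ammonia crates from the architecture slide (the function name is hypothetical):
use pulldown_cmark::{html, Parser};

fn render_comment(markdown: &str) -> String {
    // Markdown -> HTML
    let parser = Parser::new(markdown);
    let mut unsafe_html = String::new();
    html::push_html(&mut unsafe_html, parser);

    // Sanitize the generated HTML before it reaches the browser.
    ammonia::clean(&unsafe_html)
}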
Diesel: use a thread pool!
let thread_pool = tokio_threadpool::Builder::new()
    .name_prefix("risso-api")
    .keep_alive(Some(std::time::Duration::from_secs(30)))
    .pool_size(config.max_connections as usize)
    .build();

pub fn spawn_db<F, T, E>(&self, f: F) -> impl Future<Item = T, Error = failure::Error>
where
    E: std::error::Error,
    F: FnOnce(&Connection) -> Result<T, E>,
{
    // `cnx_pool` is a clone of the r2d2 connection pool held by the context
    // (elided on the slide), moved into the closure below.
    oneshot::spawn_fn(
        move || {
            let cnx = cnx_pool.get()?;
            f(&cnx)
        },
        &self.thread_pool.sender(),
    )
}
Logs
log: the de facto standard
● simple API, lots of appenders/backends (log4rs, env_logger, ...)
● exposes a set of macros
let env = env_logger::Env::default()
    .filter_or(env_logger::DEFAULT_FILTER_ENV, "info");
env_logger::Builder::from_env(env).init();
error!("Oops");
warn!("Attention");
info!("Starting...");
debug!("Been there");
trace!("{:?}", diesel::debug_query(&q));
ERROR 2018-11-07T17:52:07Z: risso_api::models: Oops
WARN 2018-11-07T17:52:07Z: risso_api::models: Attention
INFO 2018-11-07T17:52:07Z: risso_api::models: Starting...
slog: structured logs
let json_drain = slog_json::Json::default(std::io::stderr());
let drain = json_drain.filter_level(Level::Info);
let drain = slog_async::Async::new(drain.fuse()).build().fuse();

let log = Logger::root(
    drain,
    slog_o!("location" => FnValue(|info: &Record| {
        format!("{}:{}", info.module(), info.line())
    })),
);
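Since slog_scope::logger() is used below for per-request loggers, the root logger presumably has to be installed as the global scoped logger; a one-line sketch (keep the guard alive for the lifetime of the program, otherwise logging reverts to the default drain):
let _guard = slog_scope::set_global_logger(log);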
slog: structured logs
{"msg":"Starting...","level":"INFO","ts":"2018-11-07T18:58:12.077454+01:00",
"location":"risso_actix:110"}
{"msg":"Using database at temp/comments.db with max 10 connections.",
"level":"INFO","ts":"2018-11-07T18:58:12.083808+01:00",
"location":"risso_api::context:36"}
{"msg":"Starting 8 workers","level":"INFO","ts":"2018-11-07T18:58:12.094084+01:00","
location":"actix_net::server::server:201"}
{"msg":"Starting server on 127.0.0.1:8080","level":"INFO","ts":"2018-11-07T18:58:12.106399+01:00",
"location":"actix_net::server::server:213"}
slog: tracing requests
curl -v localhost:8080/?uri=/blog/great-post/
HTTP/1.1 200 OK
content-type: application/json
request-id: fSBClUEnHy
App::with_state(api.clone())
    .route("/", Method::GET, fetch)
    .middleware(actix_web_requestid::RequestIDHeader)
slog: tracing requests
impl<S> FromRequest<S> for RequestLogger {
    fn from_request(req: &HttpRequest<S>) -> Self::Result {
        let new_log = slog_scope::logger().new(o!("request_id" => req.request_id()));
        Ok(RequestLogger(new_log))
    }
}
{"msg":"Fetching comments","level":"INFO","ts":"2018-11-08T09:27:54.266083+01:00",
"request_id":"fSBClUEnHy","location":"risso_actix:91"}
pub fn fetch(log: RequestLogger, ...) -> impl Responder {
    slog_info!(log, "Fetching comments");
    ...
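RequestLogger itself is not shown; a plausible definition (hypothetical, not the project's actual code) is a newtype around a slog Logger with a Deref impl, so the slog macros can use it directly:
use slog::Logger;
use std::ops::Deref;

pub struct RequestLogger(Logger);

impl Deref for RequestLogger {
    type Target = Logger;
    fn deref(&self) -> &Logger {
        &self.0
    }
}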
Monitoring: prometheus
impl MetricsMiddleware {
    pub fn new() -> Result<MetricsMiddleware, failure::Error> {
        let histogram_opts = HistogramOpts::new("req_time", "http processing time");
        let histogram = HistogramVec::new(histogram_opts, &["status"])?;
        registry::register(Box::new(histogram.clone()))?;
        Ok(MetricsMiddleware { histogram })
    }
}

let secs = duration_to_seconds(start.elapsed());
self.histogram
    .with_label_values(&[response.status().as_str()])
    .observe(secs);
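The /metrics route registered in main() points at metrics::handler; a minimal sketch of such a handler (the name and plain-String return type are assumptions), gathering the default prometheus registry and rendering the text format shown on the next slide:
use prometheus::{Encoder, TextEncoder};

pub fn handler() -> String {
    let metric_families = prometheus::gather();
    let mut buffer = Vec::new();
    TextEncoder::new()
        .encode(&metric_families, &mut buffer)
        .expect("encoding prometheus metrics should not fail");
    String::from_utf8(buffer).expect("prometheus text output is valid UTF-8")
}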
Monitoring: prometheus
curl -v localhost:8080/metrics
# HELP req_time http processing time
# TYPE req_time histogram
req_time_bucket{status="200",le="0.01"} 9
req_time_bucket{status="200",le="0.1"} 9
req_time_bucket{status="200",le="1"} 9
req_time_bucket{status="200",le="10"} 9
req_time_sum{status="200"} 0.022794493
req_time_count{status="200"} 9
req_time_bucket{status="404",le="0.01"} 1
req_time_bucket{status="404",le="0.1"} 1
req_time_bucket{status="404",le="1"} 1
req_time_bucket{status="404",le="10"} 1
req_time_sum{status="404"} 0.000518249
req_time_count{status="404"} 1
Dockerization
Makefile:

muslrust-builder := docker run --rm -it \
    -v $(PWD):/volume \
    -v muslrust-cache:/root/.cargo \
    clux/muslrust:1.30.0-stable

build-musl:
	$(muslrust-builder) cargo build \
	    --package risso_actix --release
	$(muslrust-builder) strip --only-keep-debug \
	    target/release/risso_actix

docker-image: build-musl
	docker build -t risso-actix .

Dockerfile:

FROM scratch
WORKDIR /risso
COPY target/release/risso_actix .
CMD ["/risso/risso_actix"]
8 MB!!!
Conclusion
● Are we web yet? Yes!
● If it compiles, it runs (minus logic bugs)
● It’s fast
● It’s small
● Rust is different: you have to learn it
● async / await is coming, frameworks will have to adapt
What’s next?
● Finish the code…
● Serverless
rust-aws-lambda: port of the AWS Go SDK
Web Assembly running in node
● Administration/moderation front-end
yew - a React-like Rust framework to build SPAs in Web Assembly
Thanks!
Questions? Let’s meet outside!
- stickers!