Generate Rust tests from data files

Sometimes you just have a bunch of example data lying around and you want to make sure your code works with all of it. Some of those examples are short and sweet and could live happily as doctests, which are amazing by the way. Others are more awkward to present in that form, because of, for example, their size or number. Typically, when you have an example of how the program should behave, you write an example-based unit test. Ideally, each test would represent one isolated example and they would fail independently. But converting your source data files into unit tests one by one, manually, can be a bit tedious.

Rust build scripts to the rescue !

What if you could just iterate over the data files you already have and produce unit tests accordingly ? What follows is an example of exactly that: we iterate over directories and generate one unit test per directory, assuming each of them contains files named according to our convention.

I chose to generate integration tests here, but you can generate pretty much any code using this technique.


// in your integration test file under tests/ - include the tests generated
// by ``, one test per directory in tests/data
include!(concat!(env!("OUT_DIR"), "/tests.rs"));

// - the build script itself
use std::env;
use std::fs::read_dir;
use std::fs::DirEntry;
use std::fs::File;
use std::io::Write;
use std::path::Path;

// build script's entry point
fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();
    let destination = Path::new(&out_dir).join("tests.rs");
    let mut test_file = File::create(&destination).unwrap();

    // write test file header, put `use`, `const` etc there
    write_header(&mut test_file);

    let test_data_directories = read_dir("./tests/data/").unwrap();

    for directory in test_data_directories {
        write_test(&mut test_file, &directory.unwrap());
    }
}

fn write_test(test_file: &mut File, directory: &DirEntry) {
    let directory = directory.path().canonicalize().unwrap();
    let path = directory.display();
    let test_name = format!(
        "prefix_if_needed_{}",
        directory.file_name().unwrap().to_string_lossy()
    );

    write!(
        test_file,
        include_str!("./tests/test_template"),
        name = test_name,
        path = path
    )
    .unwrap();
}

fn write_header(test_file: &mut File) {
    write!(
        test_file,
        "use crate_under_test::functionality_under_test;\n"
    )
    .unwrap();
}

// the test template (here assumed to live at ./tests/test_template);
// `write!` above fills in {name} and {path}
#[test]
fn {name}() {{
    let input = include_str!("{path}/input-data");
    let expected_output = include_str!("{path}/output-data");

    let actual_output = functionality_under_test(input);

    assert_eq!(expected_output, actual_output);
}}

So to recap - first the build script creates a $OUT_DIR/tests.rs file containing all the generated test code. The compiler does not yet know there are tests to run there using the normal integration-test procedure, so we use a regular file under tests/ to tell it, by including the generated Rust code into that file. From there on the compilation proceeds normally, giving us one test per directory - and with that, the ability to pinpoint problematic test cases more precisely.

You can then further improve on that, e.g. add more directory structure, split tests into modules etc - you can generate any Rust code this way.

Happy hacking !

p.s. there are more Rust testing tricks to explore - and let me know if you'd like to pair program with me on anything !

Testing tricks in Rust

Use verbs as test module names

Who said that the test module needs to be named test ?
Experiment with different module names and pay attention to how the test runner displays the results.

An example of a structure that I like:

// some production code here

#[cfg(test)]
mod should {

    #[test]
    fn consume_message_from_queue() {
        // mock queue, create worker with that queue injected
        // start worker
        // check if queue's `get_message` was invoked
    }
}

Cargo prints worker::should::consume_message_from_queue when running this test, which reads nicely and exposes the requirement.

Interior mutability for controlling state of variables injected from the test

Use e.g. the atomic types family or RefCell itself to get an immutable handle to internally mutable data.
Useful when you don't want your production code to accept anything that can mutate but you still want to control that value from the test.
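As a minimal illustration of the idea - all names here are made up for the example - a Cell lets the test flip a value that the production code only ever sees through an immutable reference:

```rust
use std::cell::Cell;

// Hypothetical production type: it holds an immutable reference
// and never mutates anything itself.
struct FeatureFlag<'a> {
    enabled: &'a Cell<bool>,
}

impl<'a> FeatureFlag<'a> {
    fn is_on(&self) -> bool {
        self.enabled.get()
    }
}

fn main() {
    // the "test" side keeps its own handle to the same Cell...
    let flag = Cell::new(false);
    let feature = FeatureFlag { enabled: &flag };
    assert!(!feature.is_on());
    // ...and can change the value later, despite no `&mut` in sight
    flag.set(true);
    assert!(feature.is_on());
}
```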

See the 'injecting the system clock' example in my previous blog post.

Write the test first

Not really a Rust trick, but hey.
Try writing your test first, before production code.
If you're building a feature or fixing a bug that will affect external behaviour - start with an integration test at the crate level.

Try thinking about the ideal code you would like to interact with: what would the types be, what would the functions be ? A broad-strokes scenario, not caring much about implementation details, not caring much about covering all edge cases. Write that code. It does not compile. But it looks nice, and you're pleased.

Read through it again, add assertions. Add the types. For each missing feature or bug present in this high-level scenario - write a unit test. Satisfy that test with changes to the production code. Maybe refactor a bit in between. Once the big test is green - you're done !

There is no Rust-focused TDD book just yet for me to recommend, but here, have some for other languages:

  • Kent Beck - Test Driven Development: By Example
  • Steve Freeman, Nat Pryce - Growing Object-Oriented Software, Guided by Tests

Rust allows for more cool tricks, and generally for writing less test code, than the languages these books cover - so please use your judgment, and the tricks from this article !

Let's talk !

Have any questions ? Would like to pair on Rust ? Curious about TDD ? Ping me ! Email is good - or try Twitter.

thanks !

Rust: controlling side effects from the test.


Hello and welcome to the newest episode on testing in Rust.
Imagine you want to write a timestamping repository of sorts, one that will associate the timestamp of when the storage operation was invoked with the stored value. How do you write it in Rust ? And more importantly - how do you test it ? I would like to share a solution I found and talk a bit about how it works.

Please note that this solution can be used anywhere you need to pass a handle that is remembered by the production code, while the thing it points to is something you want to change from the test later.

trait Clock {
    fn now(&self) -> Instant;
}

struct SystemClock;

impl SystemClock {
    fn new() -> Self {
        SystemClock {}
    }
}

impl Clock for SystemClock {
    fn now(&self) -> Instant {
        Instant::now()
    }
}

struct TimestampingRepository<'a, ClockType>
where
    ClockType: Clock + 'a,
{
    clock: &'a ClockType,
    storage: Vec<(Instant, u32)>, // (timestamp, value)
}

impl<'a, ClockType> TimestampingRepository<'a, ClockType>
where
    ClockType: Clock + 'a,
{
    fn with_clock(clock: &'a ClockType) -> Self {
        TimestampingRepository {
            clock,
            storage: vec![],
        }
    }

    fn store(&mut self, value: u32) {
        self.storage.push((, value));
    }

    fn all_stored(&self) -> Vec<(Instant, u32)> {
        self.storage.clone()
    }
}

#[cfg(test)]
mod should {
    use super::*;

    #[test]
    fn handle_seconds() {
        let clock = FakeClock::with_time(Instant::now());
        let mut repository = TimestampingRepository::with_clock(&clock);;
        clock.move_by(Duration::from_secs(32));;

        let time_difference = time_difference_between_two_stored(repository);

        assert_eq!(32, time_difference.as_secs());
    }

    struct FakeClock {
        now: Instant,
        move_by_secs: AtomicUsize,
    }

    impl FakeClock {
        fn with_time(now: Instant) -> Self {
            FakeClock {
                now,
                move_by_secs: AtomicUsize::new(0),
            }
        }

        // WAT no `mut`
        fn move_by(&self, duration: Duration) {
            self.move_by_secs
                .store(duration.as_secs() as usize, Ordering::SeqCst);
        }
    }

    impl Clock for FakeClock {
        fn now(&self) -> Instant {
            let move_by_secs = self.move_by_secs.load(Ordering::SeqCst) as u64;
   + Duration::from_secs(move_by_secs)
        }
    }
}


That's a lot of code. And I already skipped some `use`s and definitions to keep it shorter.
If you want the full source code to follow along - try this playground, or this repo for the full project including production-code usage.

Let's start with the test itself.
The clock appears to be immutable (immovable) in the test, yet we call move_by on it and the whole thing appears to be working somehow. First question: can't we just make the clock mutable and skip all this ? It appears that, sadly (but fortunately), Rust prevents us from doing so. We cannot have both an immutable and a mutable borrow of the clock in the same scope. For the full example with the error, go here.

What is this sorcery then ?
We use a type that provides interior mutability, namely AtomicUsize.
On the outside it looks immutable, yet it provides a thread-safe and very narrow way of mutating the underlying state. As we trust AtomicUsize to be written correctly, we can then proceed and write our Rust code as usual, relying fully on the borrow checker. The Rust compiler is happy and our test code is happy.
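Distilled down to the bare mechanism, away from the clock example, it looks like this - the Counter type is invented for this illustration:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Invented example type: mutation happens through `&self`,
// because the atomic provides the interior mutability.
struct Counter {
    value: AtomicUsize,
}

impl Counter {
    fn bump(&self) {
        // note: `&self`, not `&mut self`
        self.value.fetch_add(1, Ordering::SeqCst);
    }

    fn get(&self) -> usize {
        self.value.load(Ordering::SeqCst)
    }
}

fn main() {
    let counter = Counter { value: AtomicUsize::new(0) };
    counter.bump();
    counter.bump();
    assert_eq!(2, counter.get());
}
```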

I wouldn't use this as a pattern in production code - the borrow checker rules are there for a reason.
Please treat it as an escape hatch to be used in specific situations, situations like this.

Happy Rusting !

p.s. if you'd like to chat about Rust - book some time with me !

Resources for starting your adventure with Rust

As I've been running several intro to Rust sessions throughout the last year, I've assembled a set of resources that help people ease into the language.

Depending on your learning style you might like:

Rustlings - A good set of starter exercises if you want to get a feeling for the language. It has links to relevant book sections for each exercise, so you can either start with the book or try to figure things out yourself first. Ah, and it uses the Playground, which means you don't need to install anything on your machine to start.
The book itself - Second edition. Good when you want a solid baseline understanding of the language first.
Rust by example - A set of examples runnable within the browser, intertwined with explanatory prose.
Exercism’s Rust exercises - a CLI app that guides you through exercises of increasing difficulty.
IntoRust - A set of short screencasts for the foundational topics.

Make sure to stay up to date with:

This week in Rust
Awesome Rust

And contribute back !

Don't forget to join the user forums for the warm welcome.

Finally, if you'd like someone to ask questions to or pair program with, book some time with me.

I’m running Rust pair programming sessions !

Why ? Rust has such a wonderful community and I want to give back as much as I can.
I am not an expert in Rust but I am not a beginner either. In addition to that I love pair programming !
The result is always much better than I could produce myself. I am happy to both share the knowledge and learn.

I would love to pair with you !
If you’re a new Rustacean, fresh to the language - come on in ! If you’re an expert - welcome !

We can work on any of the following:

  • Any project of yours !
  • Contribute back to a larger open source project (I am a contributor to e.g. cargo, rustc and rustup)
  • A project of mine - e.g. genpass

Click here or ping me an email to schedule a session - can be a remote one or in person somewhere in London.

Thank you !

Configure AWS Elastic Beanstalk Docker environment variables

AWS Elastic Beanstalk is a good 'intermediate' level hosting for Docker containers. It gives you load balancing and scalability pretty much out of the box, in exchange for being a bit more opaque to configure - the Docker bits are a bit more hidden away there. In a typical production setup you would want your Docker images to not contain anything environment-related, e.g. to be able to run them both in production and locally. An easy way to achieve that with Docker is via environment variables. On the local environment it's docker run --env NAME=VALUE - what would be the Beanstalk equivalent though ?

It turns out that Beanstalk has a magical configuration directory structure that you can pass to an environment. It goes like this:

.ebextensions/environmentvariables.config

Where is your regular Docker definition file for Beanstalk; it can look like this:

{
    "AWSEBDockerrunVersion": "1",
    "Image": {
        "Name": "image:latest",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "1234"
        }
    ]
}

While .ebextensions/environmentvariables.config is where, well, you set the environment variables that will be defined in the container. Example:

option_settings:
  - option_name: ENV_VAR1
    value: "some value"
  - option_name: ENV_VAR2
    value: "some other value"
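On the application side, keeping the image environment-agnostic usually means reading these variables at startup. A small Rust sketch - ENV_VAR1 matches the Beanstalk config above, while the fallback value is invented for the example:

```rust
use std::env;

// Read configuration from the environment, falling back to a local
// default so the same image runs both locally and on Beanstalk.
// The default value here is an invented placeholder.
fn configured_value() -> String {
    env::var("ENV_VAR1").unwrap_or_else(|_| "local default".to_string())
}

fn main() {
    println!("ENV_VAR1 = {}", configured_value());
}
```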

But wait, there's more ! Zip those files up and upload the archive to some S3 bucket - I'm going to assume the file ends up at BUCKET_NAME/CONFIG_PATH in the example below. Then you need to tell Beanstalk where the file is located. This can be achieved by creating a new application version:

aws elasticbeanstalk create-application-version --application-name APPLICATION_NAME --version-label VERSION --source-bundle S3Bucket=BUCKET_NAME,S3Key=CONFIG_PATH
aws elasticbeanstalk update-environment --environment-name ENVIRONMENT_NAME --version-label VERSION

Waiting for AWS Elastic Beanstalk environment to become ready

Elastic Beanstalk on AWS seems to be one of those services that are pretty cool, yet hard to get to know. One of the tasks you may encounter while working with it is that, after making some change to its configuration, you would like to wait for that change to finish before proceeding further. The change may be setting an environment variable, but it can also be deploying a new version of the application. I created a small bash script to help with that - it can be useful when you run this process unattended, e.g. from CI.

#!/bin/bash
set -e
set -o pipefail

# usage: APPLICATION_NAME ENVIRONMENT_NAME [TIMEOUT_SECONDS]
application_name=$1
environment_name=$2
timeout_seconds=${3:-600}
sleep_time_seconds=5
max_iterations_count=$(( timeout_seconds / sleep_time_seconds ))

function getStatus() {
echo `aws elasticbeanstalk describe-environments \
    --application-name $application_name --environment-name $environment_name |\
    jq -r '.Environments | .[]?' | jq -r '.Status'`
}

iterations=0
status=$(getStatus)
echo "Waiting for a maximum of $timeout_seconds seconds for $environment_name to become ready"
while [[ ( $status != "Ready" ) && ( $iterations -lt $max_iterations_count ) ]]; do
    status=$(getStatus)
    echo $status
    sleep $sleep_time_seconds
    iterations=$(( iterations + 1 ))
done

Happy coding !

Setting up Rust development environment using VSCode on a Mac

This post is a part of the upcoming series on different ways of setting up your Rust development environment. It's time for VSCode.

Completion and highlighting

While on Linux VSCode with the Rust plugin seems to work more or less out of the box, on a Mac I needed to spend some time configuring it.

First things first though, let's start by installing Rust version manager, rustup.

curl -sSf | sh

We will be using the nightly version of Rust, so as to have one version that can compile all of our tools. This is mostly due to clippy requiring a nightly compiler.

rustup install nightly
rustup default nightly

We will need the Rust Language Server to provide code completion.

rustup component add rls-preview --toolchain nightly
rustup component add rust-analysis --toolchain nightly
rustup component add rust-src --toolchain nightly

For a more wholesome experience, please install some tools as well:

cargo install clippy rustfmt rustsym

Now finally, for the VSCode itself, press cmd-p and ext install vscode-rust. I'm using the new Rust extension as Rusty Code has been discontinued.

If you're lucky - that's it, you should have working completion and highlighting in Rust files. Check this by opening any Rust source code file. If instead you're greeted by this message: You have chosen RLS mode but neither RLS executable path is specified nor rustup is installed - then we need to help the extension get to know your setup a bit:

In VSCode go to Settings using cmd-, and put the following config elements there:

{
    "rust.cargoPath": "/Users/yourusername/.cargo/bin/cargo",
    "rust.cargoHomePath": "/Users/yourusername/.cargo",
    "rust.rustfmtPath": "/Users/yourusername/.cargo/bin/rustfmt",
    "rust.rustsymPath": "/Users/yourusername/.cargo/bin/rustsym",
    "rust.rustLangSrcPath": "/Users/yourusername/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/src/rust/src",
    "rust.mode": "rls",
    "rust.rls": {
        "executable": "/Users/yourusername/.cargo/bin/rls",
        "useRustfmt": true
    }
}
As the paths in the config need to be absolute, remember to adjust to your situation (system username) accordingly.

Now when you reload and start editing a Rust file you should see RLS: Analysis finished on the bottom bar and the completion and highlighting should all work. Yay !

Building and testing

VSCode has a system of tasks that we can leverage to run the build and test from within VSCode. If you go to Tasks->Configure tasks it will create an empty tasks.json file in your repository. Change it to the following to allow for cargo to be hooked up as your build tool and test runner.

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "type": "shell",
            "command": "cargo build",
            "group": {
                "kind": "build",
                "isDefault": true
            },
            "problemMatcher": []
        },
        {
            "label": "test",
            "type": "shell",
            "command": "cargo test",
            "group": {
                "kind": "test",
                "isDefault": true
            },
            "problemMatcher": []
        }
    ]
}
You can use cmd-shift-b to run the build now.


Debugging

For the native debugger to work we need to install another VSCode extension, called 'LLDB Debugger'. That would be cmd-p and ext install vadimcn.vscode-lldb.

After reloading VSCode you should be able to set breakpoints in the side gutter and run the program under the debugger by pressing F5. Doing this for the first time will bring up the debugger choice window. Choose 'LLDB Debugger' as your debugger and you will be greeted with a JSON configuration file in which you need to give the debugger a few details about your project. It may look like this:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "lldb",
            "request": "launch",
            "name": "Debug",
            "program": "${workspaceRoot}/target/debug/name_of_your_executable",
            "args": [],
            "cwd": "${workspaceRoot}",
            "preLaunchTask": "build"
        }
    ]
}
And that should be it !

Now you should be able to set breakpoints and debug through the code.

Start the debugging session by pressing F5 again - this should result in the build proceeding and then the debugger launching.

Questions ?

Any questions ? Ask on and ping me the link to the post on Twitter, or email it to me at . This way the answer will be visible to everyone in the community.

Keep on Rusting !

Adding graphs to posts in Nikola

I really like to teach and I try to explain things in a simple manner. There is often no better way of making an explanation than visualizing it. The problem is that I really can't draw, especially on a computer. Wouldn't it be awesome if I could make the computer draw for me ? I found out that, unsurprisingly, there is software for that already. The one I like is called mermaid - it renders a simple text description of a graph or diagram into an html representation. It can look something like this.

graph TB
    subgraph one
    a1-->a2
    end
    subgraph two
    b1-->b2
    end
    subgraph three
    c1-->c2
    end
    c1-->a2

This blog is rendered by Nikola, hence I would like to show you how I've added mermaid support to my Nikola installation. I use USE_BUNDLES = False in, as it gives me more control and is more HTTP/2 friendly. With that disabled, I can include mermaid's style and js files like so (also in

EXTRA_HEAD_DATA = """
<link rel="stylesheet" type="text/css" href="/assets/css/fontawesome.css">
<link rel="stylesheet" type="text/css" href="/assets/css/titillium.css">
<link rel="stylesheet" type="text/css" href="/assets/css/mermaid.forest.css">
"""

BODY_END = """
<script src="/assets/js/mermaid.js"></script>
<script>mermaid.initialize({startOnLoad:true, cloneCssStyles: false});</script>
"""

Where do all these files come from though ? In my case, I have a custom theme, based on zen called zen-cyplo. The assets in the sources are located under themes/zen-cyplo/assets/. Oh, and cloneCssStyles: false is there as the default of true made the different css styles on my blog clash. Finally, to use mermaid in the post do (for reStructured Text):

.. raw:: html

        <div class="mermaid">
        graph TB
                subgraph one
                a1-->a2
                end
                subgraph two
                b1-->b2
                end
                subgraph three
                c1-->c2
                end
                c1-->a2
        </div>

You can click on the source button located below the title of this post to see it in action. If you are interested in the build process and how all these pieces come together - the complete sources for this blog are hosted under

Upload your site to Netlify using their incremental deployment API

I've recently switched to a setup where I do all the builds for this blog on Travis. While doing so I needed to migrate away from using Netlify's internal build infrastructure. This resulted in a quick Python script that lets you upload an arbitrary directory tree to Netlify, using their incremental deployment API. That means that while this site is quite big in size, deployments go rather quickly ! There are some known issues, but apart from those the script should just work for any custom Netlify deployment you would like to have. I use it on this very site, to get a preview of any PR before merging it, as well as to deploy the main site after the PR is merged. I hope you will find it useful - and please do not hesitate to post an issue or a PR !