With RustConf 2020 coming tomorrow, I want to tell folks about how I have been “re-reading” The Rust Programming Language. Affectionately known as “The Book”, this resource is the ultimate guide on how to get started with the Rust programming language. You can buy it at almost any storefront, or view it online for free.
This post is aimed primarily at Rust beginners, and people who want to develop a deeper understanding of the language. Much like studying for exams, there are efficient ways to use your time, and uh… inefficient ways (read: playing Destiny 2)… to use your time.
Everyone has their own learning preferences and requirements, but I’d like to share a method that worked for me. It’s my hope that it may work for other folks reading this as well.
But first, let’s talk about why I am bothering with this. The resulting perspective may provide some insight into why I chose this method, and how you may relate to that experience.
After an “okay-ish” first pass (nearly a year ago), I tried editing, compiling, and using different projects throughout the ecosystem. My goal was to use trivial, Rust-based software on a daily basis before I dove deeper.
I eventually created gfold, a small command-line application, as my first public foray into Rust. I polished the pet project, and proceeded to check out, edit, and compile several well-known projects afterwards. Everything rolled along smoothly early on.
Well, months later, I wanted to shift my focus towards larger, non-trivial projects. I decided to double down and try (read: faceplant) to become an intermediate, or expert, user of the language.
Even though I primarily work in userspace, the promise of a non-garbage-collected language without the pitfalls of traditional memory management is vital to my interests. As cloud-native circles look into edge compute and IoT, I believe that Rust is already beginning to play a major role in those ecosystems.
The language’s community is also incredible. The amount of mentors, Discord groups, and people supporting each other is… yeah, incredible. I hope that future tech circles use this community as a blueprint going forward.
For all of those reasons and more, I decided to return to “The Book” to internalize the fundamentals of the language.
This strategy revolves around writing a crate (code that can be compiled into a binary, or library, in Rust) for every section of every chapter. While this might seem as rudimentary as writing some Bash and Make code, I actually leverage Cargo, Rust’s package manager, for the entire repository.
The goal is to have as little friction as possible between writing down notes/code and reading the book. Even in a world of beautiful tablets and stylus/pencil devices, many opt for pencil and paper. Whether it’s for muscle memory, force of habit, accessibility, or just to get eyes off the screen, there are undeniable benefits to pencil and paper that technology has not (and may never) replicate.
To reduce friction as much as possible, I opted to write (type) down my chapter notes as comments within the “section crates” themselves. This system allows me to co-locate my notes with my Rust code, and learn a bit extra about Cargo along the way. It also keeps me on the keyboard, either writing code or taking notes in the same files.
That’s enough theory-crafting though. Time to get started!
You can use the version control system of your choice, but for those looking for guidance, I recommend using the following resources…
GitHub offers unlimited, free, private repositories for individual users. Since you most likely do not want to “learn out loud” with a public repository, toggling this feature is key.
For those new to the Git CLI, GitHub Desktop is a nice GUI for smaller, solo projects, like ours. While I recommend learning the CLI, I wanted to add an accessible option that folks have enjoyed using in the past.
Finally, if we are using Git, we have to make sure that we do not commit unwanted files.
Create a file named `.gitignore` at the base of your repository, and write the following to it…
```
# Generated by Cargo. These sub-directories will contain compiled files and executables.
**/debug/
**/target/

# More information on Cargo.lock: https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
**/Cargo.lock

# These are backup files generated by rustfmt.
**/*.rs.bk
```
You may notice that this is nearly identical to GitHub’s default Rust `.gitignore` file. That file is great for a single-crate repository; however, ours contains a few modifications to work with a multi-crate repository. The `**/target/` directories contain our executables and related artifacts, while the `Cargo.lock` file contains the exact information about our crates’ dependencies. Since we are building many sub-crates, we need to make sure that generated artifacts in any sub-directory do not get committed. We solve this by prepending the `**/` glob pattern, which matches at any depth, and… we are done (so far).
With that file created, and the repository set up, let’s catch ourselves a break in advance, and set up our favorite editor for success. I recommend setting up the following…
- Rust Analyzer
- A TOML file plug-in or extension
Rust Analyzer is an incredibly good compiler frontend for the language. It provides best-in-class support for writing Rust code in your preferred editor. While I use it with Neovim, you can use it with most popular editors, and IDEs. For folks without a favorite, I recommend VS Code.
Support for TOML does not come by default in all editors. Since we use the format frequently with `Cargo.toml`, I recommend installing a reputable plug-in or extension when working with Rust. I prefer to use vim-toml (also with Neovim).
This is where we begin to learn a little bit about Cargo before even reading “The Book”. Let’s take a look at the following excerpt from the Cargo documentation:
“Alternatively, a Cargo.toml file can be created with a [workspace] section but without a [package] section. This is called a virtual manifest. This is typically useful when there isn’t a “primary” package, or you want to keep all the packages organized in separate directories.”
While `Cargo.toml` files are typically used for simple dependency management in single-crate repositories, they can do so much more.
For our use case, we need to have a “meta” `Cargo.toml` file to manage our sub-crates in a workspace. Specifically, we need a “virtual manifest”, as mentioned above.
“What does this do for us?” Glad you asked.
```sh
cargo fmt
cargo build --verbose
```
Running the above commands in the base of this repository will format, and compile, the code of every sub-crate in one fell swoop. For your first sub-crates, I recommend using Cargo (and rustc) as “The Book” intends, but after the early tutorials, you can switch to using the virtual manifest.
Before we continue, we have to edit our `Cargo.toml` to find all of our sub-crates in the repository. Create the file, and write the following to it…
```toml
[workspace]
members = [
    "ch*/*"
]
```
Non-beginners: do not accidentally include a `[package]` section, as we need this to be a virtual manifest.
I’ve already given away the naming pattern, but the key here is to keep a consistent scheme. We don’t strictly need to, but doing so means we never have to edit this file again.
This section is a little more straightforward.
Our naming scheme, hinted at in the virtual manifest, leads Cargo to look for all base directories beginning with `ch`, and to target every crate within those directories. For this to work, we need to make sure that our sub-crates maintain the same scheme.
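As a sketch, a repository following this scheme might look like the layout below (the chapter and section names are hypothetical placeholders):

```
.
├── .gitignore
├── Cargo.toml                      # the virtual manifest
├── ch01-getting-started/
│   ├── ch01-s01-installation/
│   └── ch01-s02-hello-world/
└── ch03-common-concepts/
    └── ch03-s01-variables/
```

Every sub-crate sits two levels deep, so the `"ch*/*"` glob in the virtual manifest picks up all of them.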
Cargo offers a sub-command that scaffolds a crate for the user.
A crate can either be compiled to a binary (the default) or a library, specified by the `--lib` flag.
Ultimately, the choice will depend on the specific section’s requirements, and code block examples.
```sh
mkdir ch01-name
cd ch01-name
cargo new ch01-s01-section-one-name
cargo new --lib ch01-s02-section-two-name
```
The above Bash commands can be translated to your preferred GUI workflow.
- `mkdir`: creates a new directory (folder), which stores all sub-crates for a specific chapter.
- `cd`: changes the current working directory (current folder) to make sure that we create our sub-crates in the right place.
Now, within every sub-crate, we can take notes within our new `.rs` files and our `Cargo.toml` files. It’s all co-located, and compilable from one location.
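For example, a section crate’s `main.rs` might mix notes and code like this (the chapter, section, and notes shown are a sketch of my own, not the book’s text):

```rust
// Notes: Chapter 3, Section 1 — variables and mutability.
//
// - Variables are immutable by default; `mut` opts into mutation.
// - Shadowing re-binds a name with `let`, and can even change its type.

fn shadowing_example() -> usize {
    let spaces = "   "; // a &str
    let spaces = spaces.len(); // shadowed: now a usize
    spaces
}

fn main() {
    println!("spaces: {}", shadowing_example());
}
```

Run `cargo run` inside the sub-crate (or `cargo build` from the repository base) and the notes travel with the code.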
Nothing here is dangerous, but misusing a `boilerplate.txt` file can lead to bad programming practices and poor code hygiene.
I am going to be 100% clear: do not use the following attributes without knowing what they do, and why they exist.
“Whoa, what’s an attribute? Why are you telling me all this?”
Do not worry!
“The Book” will teach you all about attributes (and macros), give some examples, and recommend some best practices.
For now, I’ll give some examples of attributes to (not hastily) paste at the top of our source files.
Here’s the thing: many sections reuse variables, or stop using them entirely after showing them in the code block examples. It can be annoying to have STDOUT clogged with compiler warnings when you are trying to follow these examples. This is through no fault of the book’s authors (love you Carol, Steve, and all other contributors!); it is simply housekeeping that comes along with our learning methodology. Essentially, our goal is to minimize compiler warning clutter when we purposefully want to ignore warnings.
As I mentioned earlier in this post, the key to this method is to minimize friction while taking notes and learning. Therefore, I recommend keeping a `boilerplate.txt` file handy, containing useful attributes for de-cluttering your STDOUT. Here are some examples…
```rust
#![allow(dead_code)]
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_assignments)]
```
I strongly recommend encountering frequent warning clutter first before reaching for these attributes.
All of my warnings aside, I get a lot of use out of my pet `boilerplate.txt` at the base of my repository. It works well, and keeps me moving from section to section.
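To sketch how these fit into a section crate: the crate-level attributes below silence the warnings that the deliberately unused example code would otherwise produce (the function and variable here are hypothetical stand-ins for a book example).

```rust
// Crate-level attributes: silence warnings for code we intentionally
// leave unused while following along with the book.
#![allow(dead_code)]
#![allow(unused_variables)]

// An unused function: would normally trigger a `dead_code` warning.
fn scratch_example() -> i32 {
    42
}

fn main() {
    // An unused binding: would normally trigger an `unused_variables` warning.
    let x = 5;
    println!("compiles without warnings");
}
```

Without the two attributes at the top, `cargo build` would emit warnings for both `scratch_example` and `x`; with them, STDOUT stays clean.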
With our learning environment ready to go, you might have noticed that this project can be cleanly translated into your favorite CI system. This step is optional, but for those familiar with GitHub Actions, Travis CI, CircleCI, etc., it might be worth implementing here.
This code is not continuously deployed, nor is it (most likely) in use by anyone other than you. I believe that you can forgo using a CI pipeline here, and save on the compute cost. However, I still wanted to mention the option.
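If you do want CI anyway, a minimal GitHub Actions workflow is enough to format-check and build every sub-crate through the virtual manifest. The file name and action versions below are assumptions; adjust them to your setup:

```yaml
# .github/workflows/ci.yml (hypothetical file name)
name: CI

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: rustup update stable
      # Format-check and build every sub-crate via the virtual manifest.
      - run: cargo fmt -- --check
      - run: cargo build --verbose
```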
I hope that this method works for (at least some) newcomers and aspiring intermediate/expert users alike. Please let me know what you think, and how this method works out for you!
I’m looking forward to all the new friends joining the Rust community in the time following RustConf 2020. Have fun reading The Rust Programming Language, whether it’s for the first or one hundredth time, and catch you at the conference!