About



This is my collection of notes and supplementary material for various projects, including my (primarily) programming-related videos.
Update log 2026
March 9th, 2026
Released: Intro to Odin - Code Examples
Currently in the works: Intro to Zig - Code Examples (Ziglings)
March 4th, 2026
Recently released: Intro to Odin programming language
Currently in the works: Intro to Odin - Code Examples
February 17th, 2026
Going forward, I intend to use this markdown book as the central place for all of my projects, including any writing, links to project repos, plus links and notes for my YouTube videos.
I have a lot of new stuff in the works, which I’ll announce here on the “blog” page(s) as they’re ready.
Recently released: video and notes about Jujutsu version control
Currently in the works: some videos and notes about the Odin programming language (videos should be up by end of month)
Programming Fundamentals

- every programming language in 15 minutes
- survey of programming languages
- text and numbers
- data structures
- searching and sorting algorithms
- Hardware and Operating System Basics
- Unix system calls
- Unix terminals and shells
- Object-Oriented Programming concepts
- the Internet
- cryptography
- relational databases
- version control with Mercurial
Web Programming

- HTML and CSS
- server-side web programming
- Go web app starter guide and GopherJS (Go compiled to Javascript)
- text search with Lucene
Programming Languages

- intro to programming with Go
- the Go language (unlike the intro programming with Go series, this assumes prior programming knowledge)
- the Clojure language
- the C language
- the Javascript language
- the Java language
- the Python language
Odin Intro - Data Types and Polymorphism
This text is the supplementary notes for a series of videos that introduce data types and polymorphism in the Odin programming language:
Warning
The videos and this text assume no prior knowledge of Odin itself, but they do assume the audience already has some familiarity with C (or other languages with pointers, such as C++, Rust, Zig, or Go). Note that this text is intended to be read after first watching the videos.
For more about Odin, see also:
- the Odin docs overview
- the Odin docs demo
- Understanding the Odin Programming Language by Karl Zylinski
Basic Number Types
Integers in Odin come in five different sizes with both signed and unsigned types:
| Type | Signedness | Size |
|---|---|---|
| i8 | signed | 8 bits |
| i16 | signed | 16 bits |
| i32 | signed | 32 bits |
| i64 | signed | 64 bits |
| i128 | signed | 128 bits |
| u8 | unsigned | 8 bits |
| u16 | unsigned | 16 bits |
| u32 | unsigned | 32 bits |
| u64 | unsigned | 64 bits |
| u128 | unsigned | 128 bits |
There are also the types int and uint, which are generally your default choices. Their sizes depend on the target platform you’re compiling for, e.g. when compiling for x64, an int or uint will be 64 bits.
There’s also the type called byte, which is actually just an alias for u8.
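To make the aliasing concrete, here's a quick sketch (size_of is Odin's built-in size query):

```odin
b: byte = 0xFF // byte is just an alias for u8
u: u8 = b      // no cast needed: byte and u8 are the same type
// int and uint are always the same size as each other
assert(size_of(int) == size_of(uint))
```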
Floating-point numbers come in three sizes:
| Type | Size |
|---|---|
| f16 | 16 bits |
| f32 | 32 bits |
| f64 | 64 bits |
Booleans
Booleans also come in multiple sizes:
| Type | Size |
|---|---|
| b8 | 8 bits |
| b16 | 16 bits |
| b32 | 32 bits |
| b64 | 64 bits |
While, in principle, a boolean only requires a single bit, Odin has these multiple sizes mainly to allow easier interop with various binary formats and to let you better control padding and alignment in structs. Most of the time, however, you’ll simply default to using the type called bool. Like a b8, a bool is 8 bits in size (though the compiler considers b8 and bool to be distinct types).
Strings
The primary string type, called string, represents a UTF-8 encoded string, and the type called string16 represents a UTF-16 encoded string.
Concretely, a string or string16 value is actually a pointer to a buffer of characters and an integer representing the length of the text. So when you assign, pass, or return a string value, what’s actually being copied is just a pointer and integer, not the actual character data.
For ease of interop with C, Odin also has types cstring and cstring16. These cstring types have no integer to represent length because they instead use the C convention of signaling the end of the character data with a 0 byte.
Odin also has an integer type called rune that represents the Unicode codepoint of an individual character.
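A small sketch of runes versus string bytes ('é' occupies two bytes in UTF-8):

```odin
r: rune = 'A'       // a rune literal is a single codepoint
assert(r == 65)     // 'A' is codepoint 65
s: string = "héllo"
assert(len(s) == 6) // len counts bytes, not characters
```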
Important
Because Odin is not a garbage collected language, you should keep in mind how the character buffers pointed to by strings are allocated. For a string literal, the character buffer is statically allocated, meaning the data resides alongside the code of the executable itself. For any string created at runtime, the character buffer must be allocated dynamically.
Zero values
Data types in Odin have a concept of a “zero value”, meaning the value of the type where every bit is 0. When a variable is left uninitialized, it defaults to the zero value.
Zero values by type:
| Type | Zero value |
|---|---|
| numbers | 0 |
| booleans | false |
| strings | empty string |
| pointers | nil |
| structs | all fields are their zero values |
| enums | 0 (enums are represented in memory as integers) |
| unions | nil (unless the union type is declared with certain directives) |
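To illustrate, uninitialized variables compare equal to the zero values above:

```odin
i: int    // defaults to 0
b: bool   // defaults to false
s: string // defaults to ""
p: ^int   // defaults to nil
assert(i == 0)
assert(b == false)
assert(s == "")
assert(p == nil)
```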
Casts
Compared to C and some other languages, Odin is much stricter about explicit casting, disallowing most implicit conversions to help prevent absent-minded mistakes. For example, to assign an i32 value to an i64 variable, the cast cannot be left implicit.
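A sketch of the required explicit cast:

```odin
small: i32 = 42
big: i64
// big = small would be a compile error: no implicit i32 -> i64 conversion
big = i64(small) // the cast must be explicit
```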
Distinct types and aliases
The double colon syntax in Odin denotes a compile-time definition of a constant, a procedure, or a type:
// defines Also_Int as an alias for int
Also_Int :: int
// defines My_Int as a type that is like int but
// considered separate by the compiler
My_Int :: distinct int
A distinct type can be explicitly cast to and from its doppelganger:
// the explicit cast is required here because My_Int is distinct from int
// declare an int variable 'i' and assign it the value 3
i: int = 3
// declare a My_Int variable 'j' and assign it the value of i
j: My_Int = My_Int(i)
Literals
Literals in Odin have their own distinct types, which a bit confusingly are called the “untyped” types. Integer literals are untyped integers, floating-point literals are untyped floats, boolean literals are untyped booleans, and string literals are untyped strings. These special untyped types have a few special rules:
- They only exist at compile time, so you can’t, say, create a variable with one of these untyped types.
- These types can be implicitly cast to their related types.
- Casts of a literal perform range checks.
Some example casts:
x: f32 = 14 // untyped int implicitly cast to f32
y: u8 = 9 // untyped int implicitly cast to u8
y = 1000 // compile error! 1000 is not in the range of a u8
a: bool = false // untyped boolean implicitly cast to bool
b: b16 = true // untyped boolean implicitly cast to b16
c: b64 = true // untyped boolean implicitly cast to b64
s: string = "hello" // untyped string implicitly cast to string
s16: string16 = "hello" // untyped string implicitly cast to string16
cs: cstring = "hello" // untyped string implicitly cast to cstring
When a variable’s declared type is inferred from a literal:
i := 3 // inferred to be int
f := 3.5 // inferred to be f64
b := true // inferred to be bool
s := "hi" // inferred to be string
Pointers
A pointer is a value that represents a memory address. To dereference a pointer is to access the value at the memory address represented by the pointer.
A pointer value is typed as a specific kind of pointer, e.g. an int pointer is intended to represent the memory addresses of only ints, or a string pointer is intended to represent the memory addresses of only strings, etc. Thanks to pointers being typed and Odin’s static typing, the compiler can know the type of value at the address represented by the pointer. For example, dereferencing an int pointer accesses an int value rather than some kind of other value.
The ^ operator on the right side of a pointer expression dereferences the pointer. The & (address-of) operator on the left side of a storage location expression (e.g. a variable) returns its address:
p: ^int // declare a variable 'p' which is an int pointer
i: int = 7
p = &i // assign the address of i to p
x: int
x = p^ // assign 7 (the dereference of p) to x
p^ = 3 // assign 3 to the dereference of p (a.k.a. the address stored in p)
Note
Odin uses the ^ symbol instead of C’s traditional *. Also unlike C, Odin puts the dereference operator on the right. Placing it on the right works out nicely when pointers are used in combination with arrays.
rawptr
A rawptr is Odin’s closest analog of a C void pointer. Unlike other pointers, a rawptr can represent the address of any kind of value.
Other pointer types can be implicitly cast to rawptr, but a cast from rawptr to other pointer types must be explicit.
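A sketch of both directions:

```odin
i: int = 7
p: ^int = &i
r: rawptr = p  // implicit cast: any pointer type to rawptr
q := (^int)(r) // explicit cast required: rawptr back to ^int
assert(q^ == 7)
```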
uintptr
Like a rawptr, a uintptr can represent the address of any kind of value, but unlike a rawptr, a uintptr is an integer type, so it can be used as an unsigned integer in arithmetic operations.
Note
Unlike C or other C-like languages, Odin doesn’t let us do arithmetic directly on pointers, but instead we can convert a pointer into a uintptr, perform the arithmetic, and then cast back to a pointer. More commonly, though, a multi-pointer is used instead.
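A sketch of that round trip (purely illustrative; here the arithmetic stays within a known array, so the addresses are valid):

```odin
arr: [2]int = {10, 20}
p: ^int = &arr[0]
// pointer -> integer, do the arithmetic, integer -> pointer
addr := uintptr(p) + size_of(int)
q := (^int)(rawptr(addr)) // now points at arr[1]
assert(q^ == 20)
```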
multi-pointers
What Odin somewhat oddly calls multi-pointers are pointers that can be indexed like an array to do pointer arithmetic. (Compared to a uintptr, a multi-pointer is a bit more convenient and less error prone.)
m: [^]int // a multi-pointer to int
i: int
m = &i // implicit cast from ^int to [^]int
m[3] = 100 // unsafe! store int 100 at address of m + size_of(int) * 3
i = m[-5] // unsafe! assign to i the int read from the address of m - size_of(int) * 5
Warning
Always keep in mind that arbitrarily indexing memory is fundamentally unsafe, as in this example where we are jumping to meaningless locations on the call stack. In real use cases, multi-pointers should generally only be used to access addresses within known allocated blocks of memory.
Arrays
Arrays in Odin are fixed-size, homogeneous, and either stack-allocated or globally allocated.
// declare variable 'arr' to be an array of 5 ints
arr: [5]int
// assign to arr a literal of 5 ints
arr = [5]int{1, 2, 3, 4, 5}
// shorthand for prior (the size and type are inferred from the target)
arr = {1, 2, 3, 4, 5}
// declare a variable 'nums' to be an array of 3 ints
// (the size is inferred from the number of values)
nums := [?]int{11, 22, 33}
arr: [100]string // an array of 100 strings
// an array literal with explicit indexes (can be partial and out of order)
arr = {
4 = "apple",
1 = "banana",
3 = "orange",
}
// same effect as above
arr[4] = "apple"
arr[1] = "banana"
arr[3] = "orange"
arr: [100]string // an array of 100 strings
// indexes in an array can be ranges
arr = {
4 = "apple",
10..=12 = "banana", // 10 through 12 (inclusive)
80..<82 = "orange", // 80 up to but not including 82
}
// same effect as above
arr[4] = "apple"
arr[10] = "banana"
arr[11] = "banana"
arr[12] = "banana"
arr[80] = "orange"
arr[81] = "orange"
Whereas an array variable in C is actually a constant pointer value, this is not the case in Odin. An Odin array is a proper value unto itself, and so arrays are assigned, passed, compared, and returned by value, not by reference. When we assign one array variable to another, the entire array is copied, and if we compare two arrays for equality, all of their corresponding indexes are compared for equality.
When you do want to assign, pass, or return arrays by reference, you can do so with array pointers or with slices, which we’ll cover in a moment.
By default, array indexing in Odin is bounds checked both at compile time and runtime:
arr: [5]bool
arr[100] = true // compile time bounds check error
i := 100
arr[i] = true // runtime bounds check panic
When we try to assign to index 100 of this 5-element bool array, the compiler gives us a compilation error because it knows the compile-time value 100 is out of bounds for this array. If, though, we index an array with a runtime expression, the bounds check happens at runtime, so indexing this array with a variable whose value is 100 triggers a panic.
These runtime bounds checks of course incur some degree of overhead, so in some performance-critical contexts you may wish to disable them with the #no_bounds_check directive:
arr: [5]bool
// no bounds checks will be performed in this block
#no_bounds_check {
i := 100
arr[i] = true // no panic, but unsafe at runtime
}
Slices
A slice in Odin is a value that represents a subrange of an array (or alternatively, an array-like buffer that stores contiguous, homogeneous values). Concretely, a slice contains a pointer to the start of the subrange plus an integer for the length of the subrange.
Warning
For Go programmers, it’s important to note that, unlike Go slices, Odin slices do not contain a capacity, and there is no append operation for slices. Odin’s closest equivalent of a Go slice is called a dynamic array.
// declare 's' to be a slice of ints
s: []int
arr: [100]int
// from the array, get a slice starting at index 30 and ending at index 40
s = arr[30:40]
// length of the slice is 10
assert(10 == len(s))
// because index 0 of this slice is the same as index 30 of the array,
// these two assignments assign to the same location in memory
s[0] = -99
arr[30] = -99
As a convenience, the first integer of a slice operation can be omitted, in which case it defaults to 0, and the second integer can also be omitted, in which case it defaults to the length of the array.
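Sketches of the shorthand forms:

```odin
arr: [100]int
s := arr[:40] // same as arr[0:40]
t := arr[30:] // same as arr[30:100]
u := arr[:]   // a slice of the entire array
assert(len(t) == 70)
assert(len(u) == 100)
```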
Allocations
Because Odin is not a garbage collected language, the programmer is responsible for allocating and deallocating any heap memory they want to use.
For example, if we want to create a slice whose referenced data resides on the heap, we can call the make_slice procedure (from the base library), which returns a slice that references newly allocated heap memory. When we’re done with a heap-allocated slice, we should call delete_slice (from the base library) to deallocate the slice’s heap memory:
s: []int
// returns a slice with newly allocated buffer of 10 ints
s = make_slice([]int, 10)
// deallocates the allocated buffer referenced by the slice
delete_slice(s)
Note
The Odin base library also has procedures make and delete. These proc group procedures are the generally preferred shorthand for invoking all variants of the make_x/delete_x procedures.
Whereas before we were creating slices that referenced the memory of stack-allocated arrays, here the slice references heap allocated memory with no array involved. (And be clear that the slice variable itself is still stack allocated.)
The make_slice procedure, the delete_slice procedure, and all other allocating or deallocating procedures let you pass an allocator. Different allocators can track their allocations in different ways, and some allocators may perform better than others in different use cases.
When no allocator is explicitly passed to these procedures, they implicitly use the allocator provided by the context. The code below is functionally the same as the code above:
s: []int
// explicitly pass the context allocator
s = make_slice([]int, 10, context.allocator)
// explicitly pass the context allocator
delete_slice(s, context.allocator)
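The context also provides a temporary allocator for short-lived allocations. A sketch (free_all releases everything the given allocator currently holds):

```odin
s: []int
// allocate from the temporary allocator instead of the default
s = make_slice([]int, 10, context.temp_allocator)
// ... use s ...
// temporary allocations are typically freed all at once
free_all(context.temp_allocator)
```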
See more about the context and allocations in Odin.
Dynamic Arrays
Whereas a normal Odin array is fixed in size, a dynamic array has no fixed size and so can grow and shrink. Concretely, a dynamic array value resembles a slice in that it consists of a pointer and a length, but in addition, a dynamic array also has an integer representing its capacity and a reference to an allocator:
// the reserved word 'dynamic' makes this a dynamic array
arr: [dynamic]int
// returns a dynamic array with a newly allocated buffer of 7 ints,
// a logical length of 4, and a reference to the context allocator
arr = make_dynamic_array_len_cap([dynamic]int, 4, 7)
assert(4 == len(arr))
assert(7 == cap(arr))
assert(context.allocator == arr.allocator)
delete_dynamic_array(arr) // deallocate from the referenced allocator
Note
Whereas slices often reference subranges of stack-allocated arrays, that is not an intended use case for dynamic arrays. Instead, the data referenced by a dynamic array is normally heap-allocated via base library procedures.
By virtue of storing a capacity integer and allocator reference, a dynamic array allows us to append values with the append_elems procedure:
arr: [dynamic]int
arr = make_dynamic_array_len_cap([dynamic]int, 4, 7)
// this append stays within the existing capacity
append_elems(&arr, 100, 101, 102)
assert(7 == len(arr))
assert(7 == cap(arr))
// this append exceeds the capacity, so:
// 1. a new, larger buffer is allocated
// 2. the existing values are copied into this new buffer
// 3. the new elements are added to the new buffer
// 4. the original buffer is deallocated
append_elems(&arr, 123, 456)
assert(9 == len(arr))
assert(9 <= cap(arr))
Note
The Odin base library also has an append proc group procedure, which is the generally preferred shorthand for invoking all variants of the append_x procedures.
Maps
Maps are hashmaps of key-value pairs. Concretely, a map value consists of a pointer to a block of memory where the key-value pairs reside, an integer indicating the number of key-value pairs, and a reference to an allocator.
Before using a map, we must allocate it. Any time new keys are added to the map, the map’s memory may be reallocated. Like anything else we allocate, we should eventually deallocate the map when we no longer need it.
// declare a variable 'm' which is a map of string keys and int values
m: map[string]int
m = make_map(map[string]int) // allocate memory for the map
m["hi"] = 5 // adds new key "hi" with value 5 (may reallocate)
assert(1 == len(m))
m["hi"] = 7 // sets value of the existing key
delete_key(&m, "hi") // removes the existing key and its value
assert(0 == len(m))
delete_map(m) // deallocate the map when it's no longer needed
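When reading from a map, an absent key yields the value type's zero value, and the comma-ok form additionally reports whether the key was present. A sketch:

```odin
m := make_map(map[string]int)
m["hi"] = 5
x := m["bye"]       // 0, the zero value: "bye" is not in the map
v, found := m["hi"] // 5, true
assert(x == 0)
assert(v == 5 && found)
delete_map(m)
```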
Structs
Like in C and other C-like languages, a struct in Odin is a composite data type that consists of named members called fields.
// define a type named 'Cat' which is a struct consisting of two fields
Cat :: struct {
a: int, // field 'a' is an int
b: f32, // field 'b' is an f32
}
cat: Cat // declare a variable 'cat' of type Cat
cat.a = 5 // assign to the 'a' field of 'cat'
cat.b = 3.6 // assign to the 'b' field of 'cat'
// assign a Cat literal (where 'b' is 3.6 and 'a' is 5) to 'cat'
cat = Cat{b = 3.6, a = 5}
// omitted fields default to zero values (so 'a' is 0 and 'b' is 0)
cat = Cat{}
// the literal type can be inferred from the assignment context
cat = {a = 5, b = 3.6}
Anonymous structs
Rather than give every struct type a name, it’s sometimes more convenient to use anonymous struct types:
// declare variable 'anon' with anonymous struct type having two fields
anon: struct {a: int, b: f32}
// assign an anonymous struct literal to 'anon'
anon = {a = 5, b = 3.6}
Cat :: struct {
a: int,
b: f32,
}
cat: Cat
// cast an anonymous struct value to Cat
// (valid because they have the same set of field names and types)
cat = Cat(anon)
// cast a cat value to the anonymous struct
anon = struct{a: int, b: f32}(cat)
Anonymous structs are particularly convenient for fields in other structs. Here this Dog struct has a field named nested that is itself an anonymous struct, and we can then read and write the fields of the nested struct individually or as a complete struct.
Dog :: struct {
x: string,
nested: struct {a: int, b: f32}, // anonymous struct field
}
dog: Dog
// we can assign to individual fields of an anonymous struct member...
dog.nested.a = 3
// ...or we can assign a whole anonymous struct value
dog.nested = {a = 5, b = 3.6}
The semantics would be exactly the same if we defined a named struct type to use for the field, but the inner anonymous struct effectively allows us to logically group fields in the outer struct with less hassle.
Enums
An enum in Odin is an integer type with a discrete set of named compile-time values.
// declare an enum type 'Direction' with four named u32 values
Direction :: enum u32 {
North = 0,
East = 1,
South = 2,
West = 3,
}
// declare a variable 'd' of type Direction
d: Direction
d = Direction.South // assign .South (2) to 'd'
assert(2 == u32(d))
If an enum’s integer type is left unspecified, it defaults to int.
If we omit the value for the first named value, it defaults to 0, and then any subsequent omitted value will default to 1 greater than the prior value.
Direction :: enum { // defaults to int
North, // first value defaults to 0
East = 1337,
South, // defaults to 1338 (prior value plus 1)
West = -100,
}
Effectively, if we omit all the values, they will run from 0 up through 1 less than the count of named values.
Direction :: enum { // defaults to int
North, // 0
East, // 1
South, // 2
West, // 3
}
In a context where an enum value is expected, such as in an assignment to an enum variable, we can omit the name of the enum type before the dot as shorthand:
d: Direction
d = .South // Direction.South
Normally we only want to use the named values of an enum, but we can actually cast any integer value to an enum type:
d: Direction
d = Direction(9) // OK, even though there is no named Direction value for 9
We can even do arithmetic with enum values (though there aren’t many cases where this is useful):
d: Direction
d = Direction.West + Direction.East // 1 + 3 is Direction(4)
In a for loop, we can loop over every named value of an enum type in the order they are listed in the enum definition:
// loop over all named values of an enum type, printing:
// North 0
// East 1
// South 2
// West 3
for d, index in Direction {
fmt.println(d, index)
}
We can also switch on enum values, such as here where this switch will execute the case corresponding to the value of this Direction variable. Note that we can use shorthand for the enum values in each case:
d: Direction
// ...assign a value to d
switch d {
case .East:
// d is .East
case .North:
// d is .North
case .South:
// d is .South
case .West:
// d is .West
}
By default, Odin strictly demands that an enum switch have a separate case for every named value, so here when we omit cases for North and West, we’ll get a compilation error. However, if we add the #partial directive to our switch, Odin will allow us to omit cases, and we can also then have a default case:
d: Direction
// ...assign value to d
// #partial required here because we do not
// have a case for every named value
#partial switch d {
case .East:
// d is .East
case .South:
// d is .South
case:
// the default case (allowed by #partial)
// d is either .North or .West
}
To get an enum value name as a string, we can call a procedure from the reflect package. The procedure enum_name_from_value returns the name of an enum value as a string. The procedure also returns a boolean that will be false if the enum value has no name:
d: Direction = .South
if name, ok := reflect.enum_name_from_value(d); ok {
fmt.println(name) // prints "South"
}
Using procedure enum_from_name from the reflect package allows us to go the other way: we can get an enum value from a string matching the value’s name.
// if the string doesn’t match a named value of the
// specified enum type, the returned boolean will be false.
if d, ok := reflect.enum_from_name(Direction, "South"); ok {
fmt.println(int(d), d) // prints "2 South"
}
Enumerated arrays
An enumerated array is an array indexed not by number but by the named values of an enum type (this one, declared with ::, is additionally a compile-time constant whose values are fixed at compile time):
Direction :: enum { North, East, South, West }
// Declare 'directions_spanish' as an enumerated array of four strings.
// The four indices correspond to the named values of the Direction enum.
directions_spanish :: [Direction]string {
.North = "Norte",
.East = "Este",
.South = "Sur",
.West = "Oeste",
}
str: string
str = directions_spanish[.North] // "Norte"
Unions
A union is a data type defined as a set of “variant” types:
- A union value can contain a single value of any of its variant types.
- The size of a union value is large enough to store the union type’s largest variant.
- By default, a union value also stores a “tag”, an integer that indicates the variant stored in the value.
Cat :: struct {}
Dog :: struct {}
Bird :: struct {}
// declare a 'Pet' as a union of Cat, Dog, and Bird
Pet :: union { Cat, Dog, Bird }
// assume that Cat is denoted by tag 1,
// Dog by tag 2, and Bird by tag 3
pet: Pet
// variants of Pet can be implicitly cast to Pet
// assign 'pet' a Pet value containing the zero Cat value and tag 1
pet = Cat{}
// assign 'pet' a Pet value containing the zero Dog value and tag 2
pet = Dog{}
Note
In our example, the variant types of the union are all structs, but other kinds of types can also be variants in a union: numbers, strings, pointers, enums, etc. Even unions themselves can be variants of other unions.
While variants of a union can be implicitly cast to the union type, we cannot cast the other way around, even explicitly. Instead, to get the variant value held in a union value, we must use a type assertion:
pet: Pet
dog: Dog
// implicit cast from Dog to Pet
pet = dog
// this type assertion gets the Dog from the union value
dog = pet.(Dog)
bird: Bird
// this type assertion panics because the union
// value does not hold a Bird
bird = pet.(Bird)
ok: bool
// returns the Bird zero value and false because
// the union value does not hold a Bird
bird, ok = pet.(Bird)
// returns the held Dog value and true
dog, ok = pet.(Dog)
By default, a union’s zero value is nil, which has tag 0.
// an uninitialized union variable has value nil
pet: Pet
assert(pet == nil)
When we want to handle multiple variants stored in a union value, it’s generally more convenient to use a type switch:
pet: Pet
// ... assign a value to pet
// this type switch stores the variant value from 'pet' in new variable 'p',
// whose type differs in each case
switch p in pet {
case Cat:
// p is a Cat
case Dog:
// p is a Dog
case Bird:
// p is a Bird
}
The #partial directive allows a type switch to omit variants of the union and optionally include a default case:
pet: Pet
// ... assign a value to pet
#partial switch val in pet {
case Cat:
// val is a Cat
case:
// the default case (covers Dog, Bird, and nil)
// val is a Pet
}
Error values
Unlike many other languages, Odin has no exception mechanism. It does, though, have runtime “panics”, which are triggered by some operations, such as failing bounds checks. Panics will unwind the call stack, but there is no way in the language to catch and recover from these panics except to do some logging and cleanup before the program terminates.
Warning
Panics are not a mechanism for normal error handling! An error represents a non-ideal eventuality beyond your program’s control. A panic represents a bug in your code.
Normal errors in Odin are represented as ordinary data values, and these errors should follow three strong conventions:
- Error values are always represented either as boolean, enum, or union types.
- Procedures which return multiple values should return the error (if any) as the last return type.
- The zero value of an error indicates success (i.e. the absence of an error), and a non-zero value indicates some kind of error occurred. (Boolean results are the practical exception: an “ok” boolean is true on success and false on error.)
Example of a boolean error
import "core:strconv"
num, ok := strconv.parse_f64("-52.97")
if !ok {
// could not parse string as an f64
}
Example of an enum error
A number of library procedures that perform allocations use this Allocator_Error enum to signal allocation errors. Because allocations may fail in multiple ways, it’s useful to convey that information with an enum instead of just using a boolean to signal that some error has occurred:
// declared in package base:runtime
Allocator_Error :: enum u8 {
None = 0,
Out_Of_Memory = 1,
Invalid_Pointer = 2,
Invalid_Argument = 3,
Mode_Not_Implemented = 4,
}
Typically the enum error value returned by a procedure should be handled by a switch:
import "core:mem"
data, err := mem.alloc(100)
switch err {
case .None:
// ... (.None indicates no error occurred)
case .Out_Of_Memory:
// ...
case .Invalid_Pointer:
// ...
case .Invalid_Argument:
// ...
case .Mode_Not_Implemented:
// ...
}
Important
Note that we don’t use a #partial switch here, so the compiler forces us to cover every named value of the enum. It’s unwise to ignore errors, so it’s generally best to avoid #partial switches when processing enum and union error values.
Example of a union error
While enum errors provide more information than a simple boolean, we sometimes want an error value with other kinds of information, such as string messages, and this is where union errors become useful. The variant types of a union can be anything, such as strings, structs, other unions, or whatever, so a union error can hold any information we need it to:
// declared in package core:os
Error :: union #shared_nil {
os.General_Error,
io.Error,
runtime.Allocator_Error,
os.Platform_Error,
}
Typically the union error value returned by a procedure should be handled by a type switch:
import "core:os"
import "core:io"
import "base:runtime"
file, err := os.open("path/to/file")
switch e in err {
case os.General_Error:
// ...
case io.Error:
// ...
case runtime.Allocator_Error:
// ...
case os.Platform_Error:
// ...
}
or_return
Very commonly with error values, we want to immediately return the error if it is non-zero. Here we’re getting the error returned from the procedure then immediately returning it if it is non-zero:
num, ok := strconv.parse_f64("-52.97")
if !ok {
return ok // return the error
}
This pattern is so common that Odin provides the or_return operator as shorthand for the same logic:
// same effect as prior example
num := strconv.parse_f64("-52.97") or_return
In a single-return procedure, the left operand of an or_return must match the procedure’s return type.
In a multi-return procedure, the return values must be named and the error type must be last:
foo :: proc() -> (x: int, y: string, err: bool) {
x = 3
y = "hi"
err = false
// if bar returns false, this returns early with 3, "hi", and false
bar() or_return
// ...
}
or_break
The or_break operator is basically like or_return except it performs a break rather than a return:
num, ok := strconv.parse_f64("-52.97")
if !ok {
break
}
// same effect as above
num := strconv.parse_f64("-52.97") or_break
or_continue
The or_continue operator is like or_break except it performs a continue rather than a break:
num, ok := strconv.parse_f64("-52.97")
if !ok {
continue
}
// same effect as above
num := strconv.parse_f64("-52.97") or_continue
or_else
Lastly there is or_else, which unlike the other operators, takes a second operand on its right. The right operand is only evaluated if the left operand returns a non-zero error, and then the or_else expression evaluates into the right operand value instead of the left operand. In effect, an or_else lets us conveniently substitute a default value in the event of an error:
num, ok := strconv.parse_f64("-52.97")
if !ok {
num = 123
}
// same effect as above
num := strconv.parse_f64("-52.97") or_else 123
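The or_else operator also works with any comma-ok expression, such as a map lookup. A sketch:

```odin
m := make_map(map[string]int)
m["hi"] = 5
// substitute -1 when the key is absent
x := m["bye"] or_else -1
assert(x == -1)
delete_map(m)
```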
What is polymorphism?
Wikipedia defines polymorphism succinctly:
“…polymorphism allows a value type to assume different types.”
Where otherwise a single thing could only be one concrete type or behave in one way, polymorphism allows a thing to potentially vary in type and behave in different ways.
Another way to think of it: mechanisms of polymorphism allow us to express that a piece of code has variants that are similar but somehow different. In this sense, mechanisms of polymorphism enable abstraction building beyond what just plain procedures and plain data types allow, i.e. polymorphism gives us additional ways to generalize.
Now, how much one should attempt to abstract is a very debatable question, but polymorphism is useful to have in your toolset. In particular, we very commonly need the ability to store heterogeneous data in a collection and then operate upon those heterogeneous elements when iterating the collection. In C#, for example, we can create an array of Pets that may store any kind of Pet, whether a Cat or Dog, and then iterate through the collection and perform a common operation on every Pet regardless of its concrete type:
// C#
Pet[] pets = new Pet[2];
pets[0] = new Cat();
pets[1] = new Dog();
foreach (var p in pets) {
p.sleep(); // dynamic dispatch
}
This is enabled either by virtue of Cat and Dog inheriting from class Pet or by Cat and Dog implementing an interface Pet. Odin, however, lacks inheritance, interfaces, and other common polymorphism-related language features. So we’ll look at how this and similar problems can be solved in Odin by other means.
Compile time polymorphism
It’s helpful to distinguish between compile time polymorphism and runtime polymorphism, not just because their implementations differ but also because they serve quite different purposes. Compile time polymorphism serves two purposes:
- deduplicating code
- overloading names
In Odin, compile time polymorphism is enabled through a few features:
- procedure groups
- parametric polymorphic procedures
- parametric polymorphic structs
- parametric polymorphic unions
- the using modifier for struct fields
Procedure groups
Procedure groups, very simply, are procedures that are defined not as a body of code but rather as a list of other procedures. At compile time, a call to a procedure group dispatches to the procedure in its list that matches the number and types of arguments in the call.
sleep_cat :: proc(cat: Cat) { /* ... */ }
sleep_dog :: proc(dog: Dog) { /* ... */ }
// a proc group 'sleep`
sleep :: proc { sleep_cat, sleep_dog }
sleep(Cat{}) // one Cat argument, so invokes sleep_cat
sleep(Dog{}) // one Dog argument, so invokes sleep_dog
Proc groups give us the stylistic and organizational convenience of overloading a procedure name so that we can use a single name at the call sites. Unlike overloading in other languages, however, we still have to give the individual overloads their own names.
Parametric polymorphic procedures (generic functions)
Parametric polymorphic procedures are Odin’s semi-equivalent of generic functions in other languages. A procedure is parametric polymorphic if it has any parameters whose arguments and/or types are fixed for each call at compile time.
Parameters that require compile time arguments
A parameter which requires a compile time expression argument is denoted by a $ prefix on the parameter name:
foo :: proc($x: int) { /* ... */ }
foo(3) // valid because 3 is a compile time expression
i := 3
foo(i) // compile error: argument is not a compile time expression
One way a compile time argument can be useful is to specify array sizes:
// the argument for 'n' must be a compile time expression,
// but this allows us to use 'n' as an array size
make_array :: proc($n: uint) -> [n]f32 {
arr: [n]f32
return arr
}
arr_A := make_array(3) // returns an array of 3 f32 values
arr_B := make_array(7) // returns an array of 7 f32 values
A compile time argument can also allow some expressions to be evaluated at compile time:
square :: proc($val: f32) -> f32 {
return val * val // val * val is evaluated at compile time
}
To get a similar effect as what other languages call a type parameter, we can use typeid parameters that require compile time arguments:
Note
Every unique type in your program is given a unique integer id called a typeid. Type names themselves are compile time typeid expressions, and the builtin procedure typeid_of returns the typeid of a type, e.g. typeid_of(Cat) returns the typeid of Cat.
// (slightly simplified version of runtime.new)
// 'T' is a compile time typeid expression, so it can be used like a type name
// Effectively, this one procedure can return any kind of pointer.
my_new :: proc($T: typeid) -> (^T, runtime.Allocator_Error) {
return runtime.new_aligned(T, align_of(T))
}
int_ptr: ^int
int_ptr, _ = my_new(int)
bool_ptr: ^bool
bool_ptr, _ = my_new(bool)
Note
For a parametric polymorphic procedure, separate versions of the procedure are compiled for each unique call signature. For instance, in the above example, the two calls to my_new actually invoke different code: one for which T is int and one for which T is bool.
Parameters with caller-determined types
When a parameter’s type is prefixed with a dollar sign, that indicates that the parameter’s type is determined at compile time by the type of the argument from the caller:
// the arguments to 'val' can be a runtime expression of any type,
// and T can be used as a type name
repeat_five :: proc(val: $T) -> [5]T {
arr: [5]T
for _, i in arr {
arr[i] = val
}
return arr
}
bool_arr: [5]bool
bool_arr = repeat_five(true)
assert(bool_arr == [5]bool{true, true, true, true, true})
str_arr: [5]string
str_arr = repeat_five("hi")
assert(str_arr == [5]string{"hi", "hi", "hi", "hi", "hi"})
Note
Again, separate versions of a procedure are compiled for each unique call signature, so in the above example, the two calls to repeat_five invoke different code: one for which T is bool and one for which T is string.
Warning
Don’t be confused that we earlier used “T” as the name of a typeid parameter but here use “T” as the name of the parameter’s type itself. In the former case, the type T is determined by the typeid value passed as the argument; in the latter case, the type T is determined by the type of the passed argument.
A caller-determined parameter type can be used as the type of subsequent parameters in the parameter list:
// in each call, 'min' and 'max' will have the same type as 'val'
// ($ should only prefix the first T parameter)
clamp :: proc(val: $T, min: T, max: T) -> T {
// for the procedure to compile, values of
// T must be valid operands of <= and >=
if val <= min {
return min
}
if val >= max {
return max
}
return val
}
clamped_int := clamp(int(8), 2, 5) // T is int
clamped_float := clamp(f32(8.3), 2, 5) // T is f32
// compile error: T cannot be a boolean
clamped_bool := clamp(true, false, false)
Parameters with both compile time arguments and caller-determined types
An individual parameter can both require a compile time argument and get its type from the caller’s argument:
// the array size is determined by the value passed to 'n',
// and the array's element type is determined by the type of that value
array_n :: proc($n: $T) -> ^[n]T {
return runtime.new([n]T)
}
arr_A: ^[3]int
arr_A = array_n(3)
arr_B: ^[5]u8
arr_B = array_n(u8(5))
where clauses
A where clause effectively allows us to restrict which types or compile time values are allowed for a procedure. The where clause of a procedure takes a compile time boolean expression which is evaluated for each call of the procedure. If the expression evaluates false, the call triggers a compilation error.
// the where clause's boolean expression determines
// if each call is valid at compile time
// (type_is_numeric returns true if its argument is a numeric type)
clamp :: proc(val: $T, min: T, max: T) -> T where intrinsics.type_is_numeric(T) {
if val <= min {
return min
}
if val >= max {
return max
}
return val
}
// OK: int is valid for T because it is numeric
clamped_int := clamp(8, 2, 5)
// compile error: string is invalid for T because it is not numeric
clamped_string := clamp("banana", "orange", "apple")
Specialization
For the most part, specialization is just shorthand syntax for what you can otherwise express in a where clause, but unlike a where clause, specialization can introduce new type parameters:
// slash after T indicates a specialization of T,
// in this case the additional requirement that T is a slice of E
// (where E is its own type parameter)
sum :: proc(val: $T/[]$E) -> E where intrinsics.type_is_numeric(E) {
// ...
}
// valid: []int is a slice of a numeric type
i := sum([]int{8, 2, 5})
// compile error: not numeric
i = sum([]bool{true, false})
// compile error: not a slice
i = sum(8)
Note
‘E’ stands for Element, as in ‘element of a slice’, so it is the conventional name in this situation.
In this case, though, we could express the same thing more simply with a polymorphic slice element type and no specialization:
// the parameter is a slice whose element type T must be numeric
sum :: proc(val: []$T) -> T where intrinsics.type_is_numeric(T) {
// ...
}
Parametric polymorphic structs
What Odin calls a parametric polymorphic struct is a near equivalent of what other languages would call a generic struct (or a templated struct in C++). The type parameters are expressed as typeid params with $-prefixed names.
// T and U act as effective type parameters
Cat :: struct ($T: typeid, $U: typeid) {
x: T,
y: int,
z: [5]U,
}
c: Cat(f32, string) // variant where $T is f32 and $U is string
c2: Cat(int, string) // variant where $T is int and $U is string
// compile error: c and c2 are different variants of Cat
c = c2
// compile error: Cat itself is not a type
cat: Cat
Important
Despite sharing the same name, variations of the same parametric struct are distinct, incompatible types.
Aside from compile time typeid params, a struct can also have compile time unsigned integer params, which can be used to specify sizes of arrays in the struct:
// N is a compile time integer parameter, so
// it can be used to specify array sizes
Cat :: struct ($T: typeid, $U: typeid, $N: uint) {
x: T,
y: int,
z: [N]U,
}
c: Cat(f32, string, 4) // variant where $N is 4
c2: Cat(f32, string, 6) // variant where $N is 6
arr: [4]string = c.z
A struct can also optionally have a where clause, whose boolean expression is evaluated for each variant at compile time:
Cat :: struct ($T: typeid, $U: typeid, $N: uint) where N < 10 {
a: T,
b: int,
c: [N]U,
}
// valid because 6 is less than 10
c: Cat(f32, string, 6)
// compile error: invalid because 11 is greater than 10
c2: Cat(int, string, 11)
The most obvious use case for generic types is collections, such as a stack:
Stack :: struct($T: typeid) {
data: [dynamic]T,
}
make_stack :: proc($T: typeid) -> Stack(T) { /* ... */ }
push :: proc(stack: ^Stack($T), val: T) { /* ... */ }
pop :: proc(stack: ^Stack($T)) -> T { /* ... */ }
// make a stack of ints
s := make_stack(int)
// push 4 then 7 to the stack
push(&s, 4)
push(&s, 7)
// remove and return last value from the stack (7)
i := pop(&s)
Parametric polymorphic unions
Like structs, unions can also take compile time typeid and unsigned integer parameters:
Pet :: union ($T: typeid, $U: typeid, $N: uint) {
T,
int,
[N]U,
}
p: Pet(f32, string, 4) // variant with [4]string
p2: Pet(f32, string, 6) // variant with [6]string
// implicit cast to Pet(f32, string, 4)
p = [4]string{}
// implicit cast to Pet(f32, string, 6)
p2 = [6]string{}
// compile error: p and p2 are different variants of Pet
p = p2
// compile error: Pet itself is not a type
pet: Pet
Note
The main use case for a parapoly union is to allow parapoly structs as union variants: if a variant type in a union has non-concrete type params, then the union itself must have type params that are passed along to the variant.
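A minimal sketch of this (Stack and Container are hypothetical names, not from the examples above):

```odin
// a parapoly struct...
Stack :: struct($T: typeid) {
    data: [dynamic]T,
}

// ...used as a union variant: because Stack requires a type
// param, the union must itself take $T to pass along to Stack
Container :: union($T: typeid) {
    Stack(T),
    T,
}

c: Container(int) // may hold a Stack(int) or an int
```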
Struct fields with the using modifier
A struct field which is itself of a struct type can be marked with the reserved word using. This modifier doesn’t change the structure of the data at all, but it makes the members of the nested struct directly accessible as if they were fields of the containing struct itself:
Pet :: struct {name: string, weight: f32}
Cat :: struct {
a: int,
b: f32,
using pet: Pet,
}
cat: Cat
cat.pet.name = "Mittens"
cat.name = "Mittens" // same as prior line
Marking a nested struct field with using also means the containing struct type can be used where the nested type is expected as syntactic shorthand for the nested struct:
pet: Pet
pet = cat.pet
// same as prior line (assigns the Pet inside cat, not cat itself)
pet = cat
// assume that procedure feed_pet requires a Pet argument
feed_pet(cat.pet)
// same as prior line (actually passes the nested Pet, not the Cat)
feed_pet(cat)
A nested struct field marked with using can be given the special name _, which makes the nested struct itself inaccessible by name (though its members can still be accessed individually as if they were members of the containing struct):
Pet :: struct {
x: bool,
y: int,
}
Cat :: struct {
a: int,
b: f32,
using _: Pet, // this Pet field itself has no name
}
// can still access members of the nested Pet as if they belong to Cat directly
cat: Cat
i: int = cat.y
Runtime polymorphism
Whereas compile time polymorphism enables deduplication of code and overloading of names, runtime polymorphism enables us to have dynamically-typed data (including heterogeneous collections) and to operate upon this dynamically-typed data.
Odin enables runtime polymorphism with a few features:
- unions
- untyped pointers
- procedure references
Heterogeneous collections
The preferred way to represent collections containing mixed types is with unions:
Cat :: struct{}
Dog :: struct{}
Pet :: union { Cat, Dog }
pets: [10]Pet
for pet in pets {
switch p in pet {
case Cat:
sleep_cat(p)
case Dog:
sleep_dog(p)
}
}
Tip
When dealing with larger variant types, it may be preferable to include pointers in the union instead of the type itself, e.g. union { ^Cat, ^Dog } instead of union { Cat, Dog }. On the other hand, using pointers introduces the complication of managing the referenced memory.
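A rough sketch of the pointer variant (assuming the Cat, Dog, sleep_cat, and sleep_dog definitions from the example above):

```odin
Pet :: union { ^Cat, ^Dog }

pet: Pet = new(Cat) // heap allocation: this memory is now our responsibility
switch p in pet {
case ^Cat:
    sleep_cat(p^)
    free(p) // we must free the referenced memory ourselves
case ^Dog:
    sleep_dog(p^)
    free(p)
}
```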
Extensible interfaces
As a solution for runtime polymorphism, unions have two limitations:
- A union is not extensible: you cannot add additional variants to a union without redefining the original definition. This makes it impossible to extend a union brought in from a library whose source you cannot edit (or prefer not to edit).
- The variants held in a union value can only be accessed via type switches or type asserts, e.g. in the example above, accessing the value held in a Pet required a type switch with explicit cases for Cat and Dog. So even if a union type could be extended with new variants, all existing code that uses the type would have to be edited to account for the new variants.
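To illustrate the first limitation, suppose we want to add a hypothetical Bird variant to the Pet union:

```odin
// the union definition itself must be edited...
Pet :: union { Cat, Dog, Bird }

// ...and every existing type switch over Pet must also be
// edited to account for the new variant:
switch p in pet {
case Cat:  sleep_cat(p)
case Dog:  sleep_dog(p)
case Bird: sleep_bird(p)
}
```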
Runtime polymorphism also requires a way to perform dynamic dispatch, which is not provided by proc groups:
Cat :: struct{}
Dog :: struct{}
Pet :: union { Cat, Dog }
sleep_cat :: proc(cat: Cat) { /* ... */ }
sleep_dog :: proc(dog: Dog) { /* ... */ }
sleep_group :: proc { sleep_cat, sleep_dog }
pet: Pet = Dog{}
switch p in pet {
case Cat:
// compile time type of p is Cat, so sleep_group
// resolves at compile time to sleep_cat
sleep_group(p)
case Dog:
// compile time type of p is Dog, so sleep_group
// resolves at compile time to sleep_dog
sleep_group(p)
}
Parapoly procs don’t provide runtime dispatch either:
// a parapoly procedure where T must be a variant of Pet
para_sleep :: proc(pet: $T) where intrinsics.type_is_variant_of(Pet, T) {
// a 'when' code block is included in the compiled code only if true
when T == Cat {
fmt.println("cat")
}
when T == Dog {
fmt.println("dog")
}
}
// resolves at compile time to the specialization where T is Dog
para_sleep(Dog{})
We can get closer to actual runtime dispatch with procedure references (which are simply what other languages would call function pointers):
add :: proc(a: int, b: int) -> int {
return a + b
}
// variable f is a proc ref
// with signature (int, int) -> int
f: proc(a: int, b:int) -> int
f = add
x := f(3, 5) // same as calling add
So we can use proc references in our Pet example:
Cat :: struct{
// the field 'sleep' is a proc ref for proc that takes a Cat and returns nothing
sleep : proc(Cat)
}
Dog :: struct{
// the field 'sleep' is a proc ref for proc that takes a Dog and returns nothing
sleep : proc(Dog)
}
Pet :: union { Cat, Dog }
// pseudo-methods for each Pet type
sleep_cat :: proc(c: Cat) {}
sleep_dog :: proc(d: Dog) {}
dog := Dog{ sleep = sleep_dog }
cat := Cat{ sleep = sleep_cat }
pet: Pet = dog
// we cannot access the .sleep field
// without using a type switch (or type asserts), so we
// are still dispatching on type at compile time, not runtime
switch p in pet {
case Cat:
p.sleep(p)
case Dog:
p.sleep(p)
}
Even if we create a parapoly proc, the dispatch on a union value’s type still happens at compile time:
// a parapoly procedure where T must be a variant of Pet
// and must have a field .sleep that is a proc ref with a parameter of type T
sleep_para :: proc(pet: $T) where intrinsics.type_is_variant_of(Pet, T) {
pet.sleep(pet)
}
// at compile time, sleep_para requires a Cat or Dog argument, not a Pet,
// so we still need a type switch (or type asserts)
switch p in pet {
case Cat:
sleep_para(p)
case Dog:
sleep_para(p)
}
For actual dynamic dispatch, we need not just proc refs but also untyped pointers. A rawptr can reference data of any type, and the proc ref stored alongside it is responsible for casting the pointer back to the concrete type it expects:
Pet :: struct {
sleep: proc(Pet),
data: rawptr,
}
Dog :: struct{}
sleep_dog :: proc(pet: Pet) {
dog := (^Dog)(pet.data)
// ...
}
dog := Dog{}
pet := Pet{
sleep = sleep_dog,
data = &dog,
}
// dynamically calls sleep_dog
pet.sleep(pet)
Effectively, this pattern establishes an extensible Pet interface: a struct can be said to implement Pet if there is a corresponding sleep_x proc with the correct signature, e.g. a Cat struct implements interface Pet if it has a corresponding sleep_cat.
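For instance, here is a minimal sketch of a Cat implementing the Pet interface (sleep_cat here is our hypothetical implementation for Cat):

```odin
Cat :: struct{}

// Cat implements Pet by providing a sleep proc
// with the required signature proc(Pet)
sleep_cat :: proc(pet: Pet) {
    cat := (^Cat)(pet.data)
    // ...
}

cat := Cat{}
pet := Pet{
    sleep = sleep_cat,
    data  = &cat,
}
pet.sleep(pet) // dynamically calls sleep_cat
```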
Note
The method-call syntax familiar from other languages, x.y(), has no special meaning in Odin. Invoking x.y() simply invokes the proc ref stored in field ‘y’ of ‘x’, and no arguments are implicitly passed. Hence, in our example above, the pet variable is passed explicitly.
For an interface that has multiple pseudo-method proc refs, it is convenient to bundle the proc refs into a single struct:
Dog :: struct{}
// defines the proc refs of the Pet interface
Pet_Procs :: struct {
sleep: proc(Pet),
eat: proc(Pet, int) -> int,
}
Pet :: struct {
procs: Pet_Procs,
data: rawptr,
}
// a constant representing the Dog
// implementation of the Pet interface
dog_procs :: Pet_Procs{
sleep = proc(pet: Pet) {
dog := (^Dog)(pet.data)
// ...
},
eat = proc(pet: Pet, i: int) -> int {
dog := (^Dog)(pet.data)
// ...
},
}
// a Pet instance wrapping a Dog
dog := Dog{}
pet := Pet{
procs = dog_procs,
data = &dog,
}
// dynamically calls dog_procs.eat
i: int = pet.procs.eat(pet, 4)
Another quality of life affordance with this pattern is to create procedures for ‘conversions’ to the interface wrapper type:
pet_from_dog :: proc(dog: ^Dog) -> Pet {
return Pet { sleep = sleep_dog, data = dog }
}
// proc group for all pet_from_x procs
pet_from :: proc { pet_from_dog }
// get a Pet wrapping a Dog
dog := Dog{}
pet := pet_from(&dog)
pet.sleep(pet)
Not only is this pattern more concise when you need to wrap an implementing type as the interface type, it can also help prevent mistakes where, say, you create a Pet with a mismatched procedure reference and implementing type:
dog := Dog{}
// danger! mismatch of procedure reference and data type:
// calling sleep on this Pet will call sleep_cat with the
// rawptr referencing a Dog, leading to bad behaviour
return Pet { sleep = sleep_cat, data = &dog }
Odin Intro - Code Examples
As a supplement to the Odin Introduction, here are some walkthroughs of very small Odin code examples.
- These code examples mainly come from the Exercism project and are licensed under the MIT License
- Language features that weren’t covered in the prior material will be explained as they come up.
- For some examples, we include a few tests to demonstrate basics of the testing API.
Setup
The Git repo is here. The examples are found under exercises/practice.
The goal of each exercise is to pass the provided tests. To run the tests for exercise foo, run the command odin test exercises/practice/foo (assuming CWD is root of the repo).
Exercise: Binary Search
The find procedure performs a binary search, returning the index of a target value within a sorted list.
package binary_search
// Returns index of the target value in the list.
// Returns false if target value is not in list.
// Assumes list is sorted.
// #optional_ok means the caller doesn't have to capture the returned boolean
// list = a slice of T
// Because values of T are compared with <, T must be an ordered type (such as a numeric type).
find :: proc(list: []$T, target: T) -> (int, bool) #optional_ok {
// indexes denoting start and end of search range
start, end := 0, len(list)
for start < end {
// mid point between start and end
middle := (start + end) / 2
val := list[middle]
if val == target {
return middle, true
} else if target < val {
// if target is left of middle...
end = middle
} else {
// if target is right of middle...
start = middle + 1
}
}
return 0, false
}
Here are some tests from this exercise:
package binary_search
import "core:testing"
// The @(test) attribute marks the procedure as a test.
// A test procedure must have one and only one parameter of type ^testing.T
// The parameter name does not matter, but 't' is used by convention.
// Most core procedures of the testing package take the ^T param as their
// first argument, and this is used to track the test state.
@(test)
/// description = finds a value in an array with one element
test_finds_a_value_in_an_array_with_one_element :: proc(t: ^testing.T) {
input := []u32{6}
result := find(input, 6)
expected := 0
// a test fails if the arguments to expect_value() are not equal
testing.expect_value(t, result, expected)
}
@(test)
/// description = finds a value in the middle of an array
test_finds_a_value_in_the_middle_of_an_array :: proc(t: ^testing.T) {
input := []u32{1, 3, 4, 6, 8, 9, 11}
result := find(input, 6)
expected := 3
testing.expect_value(t, result, expected)
}
@(test)
/// description = finds a value at the beginning of an array
test_finds_a_value_at_the_beginning_of_an_array :: proc(t: ^testing.T) {
input := []u32{1, 3, 4, 6, 8, 9, 11}
result := find(input, 1)
expected := 0
testing.expect_value(t, result, expected)
}
@(test)
/// description = identifies that a value is not included in the array
test_identifies_that_a_value_is_not_included_in_the_array :: proc(t: ^testing.T) {
input := []u32{1, 3, 4, 6, 8, 9, 11}
_, found := find(input, 7)
testing.expect_value(t, found, false)
}
// etc...
Exercise: Pangram
The is_pangram procedure determines if its string argument contains every letter of the English alphabet (case insensitive).
package pangram
is_pangram :: proc(str: string) -> bool {
// Defines Alphabet as a bit set type which has a bit
// for every character in the range 'a' up to (and including) 'z'
Alphabet :: bit_set['a' ..= 'z']
// The zero value of a bit set is empty (all bits unset)
expected: Alphabet
found: Alphabet
// An Alphabet literal with all bits set
expected = Alphabet {
'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l',
'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
}
// Alternatively, we can get the full set
// by doing a logical not operation on the empty set
expected = ~Alphabet{} // same result as prior assignment
UPPER_TO_LOWER_DIFF :: 'a' - 'A'
// For every rune in the string
// (a "rune" is an unsigned integer representing a Unicode code point)
for r in str {
if r >= 'a' && r <= 'z' { // if lowercase...
// Adding two bit sets returns their union,
// so this is effectively setting the bit for 'r' in 'found'
found += Alphabet{r}
} else if r >= 'A' && r <= 'Z' { // if uppercase...
found += Alphabet{r + UPPER_TO_LOWER_DIFF}
} else { // if not a letter...
continue
}
}
return found == expected
}
Here are some of the tests for this exercise:
package pangram
import "core:testing"
@(test)
/// description = empty sentence
test_empty_sentence :: proc(t: ^testing.T) {
// a test fails if expect() is passed false
testing.expect(t, !is_pangram(""))
}
@(test)
/// description = only lower case
test_only_lower_case :: proc(t: ^testing.T) {
testing.expect(t, is_pangram("the quick brown fox jumps over the lazy dog"))
}
@(test)
/// description = missing the letter 'x'
test_missing_the_letter_x :: proc(t: ^testing.T) {
testing.expect(t, !is_pangram("a quick movement of the enemy will jeopardize five gunboats"))
}
// etc....
Exercise: Reverse String
The reverse procedure takes a string and returns the string which is the reverse of its characters (or more accurately, its grapheme clusters).
package reverse_string
import "core:strings"
import "core:unicode/utf8"
reverse :: proc(str: string) -> string {
// A Grapheme is a cluster of one or more Unicode code points that
// represents what a user perceives as an individual character.
// (Not all Unicode code points represent individual,
// complete renderable characters.)
graphemes: [dynamic]utf8.Grapheme
// Returns allocated dynamic array of graphemes.
// Also returns the grapheme count, rune count, and monospacing width,
// but we discard these values by assigning them to _
graphemes, _, _, _ = utf8.decode_grapheme_clusters(str)
// We want to deallocate the dynamic array when leaving this procedure,
// so we call delete with a defer statement.
defer delete(graphemes)
sb := strings.builder_make()
// #reverse makes this loop iterate through the array backwards
// g = utf8.Grapheme
// i = int index
#reverse for g, i in graphemes {
data: string
if i == len(graphemes) - 1 {
// if last grapheme in the array...
data = str[g.byte_index:]
} else {
// Determine size of the current grapheme in bytes.
next := graphemes[i + 1]
num_bytes := next.byte_index - g.byte_index
// Slicing a string produces a new string.
// To get the bytes of the current grapheme, we slice
// twice to get the string starting at g.byte_index
// and having length num_bytes.
data = str[g.byte_index:][:num_bytes]
}
// Copy the bytes of the current grapheme to the string builder.
strings.write_string(&sb, data)
}
// Return the string builder's data as a regular string.
return strings.to_string(sb)
}
Exercise: Word Count
The procedure count_words takes a string and returns a map of words and their count of occurrences in the string.
- A word consists of adjacent ASCII letters, numerals, and apostrophes.
- Words are case insensitive.
Example words:
- Hello
- won't
- foo1
- 123
package word_count
import "core:strings"
Word_Counts :: struct {
data: map[string]u32,
// The string from which all the keys are sliced.
// We need this when we free.
keys_str: string,
}
count_words :: proc(input: string) -> Word_Counts {
// to_lower() returns a newly allocated string
// We want a copy of the parameter string anyway because
// we want the Word_Counts to own its string keys.
// Also, we redeclare 'input' as a regular local variable
// because we cannot assign to a parameter and because we
// cannot use the address operator on a parameter.
// (This declaration effectively shadows the
// parameter for the rest of the scope.)
input := strings.to_lower(input)
word_counts := Word_Counts{ keys_str = input }
// A slice of the delimiter characters
// we'll use to split the string.
// (A slice literal points to content of a
// statically-allocated array.)
delims := []string{" ", ",", ".", "\n"}
// Most commonly, the in-clause of a for loop is an expression
// which returns a collection, enum, or string, and then iterates
// over the elements, named values, or runes. In these cases,
// the in-clause expression is evaluated just once.
// In this example, however, split_multi_iterate returns a
// string and boolean, and so the in-clause expression is evaluated
// before each iteration.
// Each iteration:
// 1. The returned string is assigned to str
// 2. If the returned bool is false, the next iteration is
// skipped and the loop ends.
for str in strings.split_multi_iterate(&input, delims) {
// trim() returns a string which is a slice of the original
word := strings.trim(str, "'\"()!&@$%^:")
// ignore empty words
if len(word) <= 0 {
continue
}
// If the map has not yet been allocated, this
// operation first allocates the map.
// If the key does not yet exist, it is created
// and initialized to 0 before the += operation.
word_counts.data[word] += 1
}
return word_counts
}
delete_word_counts :: proc(words: Word_Counts) {
delete(words.keys_str)
delete(words.data)
}
Exercise: Acronym
The procedure abbreviate converts a phrase to its acronym.
Hyphens are treated as word separators (like whitespace), but all other punctuation is ignored.
Examples:
| Input | Output |
|---|---|
| As Soon As Possible | ASAP |
| Liquid-crystal display | LCD |
| Automated Teller Machine | ATM |
package acronym
import "core:strings"
import "core:text/regex"
abbreviate :: proc(phrase: string) -> string {
// A backtick string ignores \ escape sequences.
pattern :: `[^ _-]+`
// Since we know the regex pattern is correct, we ignore
// the return error value.
iter, _ := regex.create_iterator(phrase, pattern)
defer regex.destroy_iterator(iter)
// We need a string builder to incrementally build
// the output string.
sb := strings.builder_make()
defer strings.builder_destroy(&sb)
// Each iteration evaluates match_iterator(),
// and the loop ends when it returns false
// capture = the current match
// _ = discard of the index
for capture, _ in regex.match_iterator(&iter) {
first_letter := capture.groups[0][0]
strings.write_byte(&sb, first_letter)
}
// Note that to_string does not make a new allocation.
// Instead, it just returns a slice of the
// builder's internal buffer.
result := strings.to_string(sb)
// Freeing the string newly allocated by to_upper
// will be the caller's responsibility
return strings.to_upper(result)
}
Exercise: Anagram
The find_anagrams procedure takes a target word strings and a slice of candidate word strings. It returns a slice of the candidate words that are anagrams of the target.
For example, given the target "stone" and the candidate words "stone", "tones", "banana", "tons", "notes", and "Seton", the returned anagram words are "tones", "notes", and "Seton".
package anagram
import "core:slice"
import "core:strings"
import "core:unicode/utf8"
// Takes a target word and a list of candidate anagram words.
// Returns allocated slice of strings containing the candidate words
// that test positive as anagrams of the target.
find_anagrams :: proc(word: string, candidates: []string) -> []string {
lc_word := strings.to_lower(word)
defer delete(lc_word)
letters := letters_in_order(lc_word)
defer delete(letters)
anagrams := make([dynamic]string, 0, len(candidates))
for candidate in candidates {
lc_candidate := strings.to_lower(candidate)
defer delete(lc_candidate)
// exact matches do not count as anagrams
if lc_word == lc_candidate {
continue
}
candidate_letters := letters_in_order(lc_candidate)
defer delete(candidate_letters)
if slice.equal(letters, candidate_letters) {
// if sorted letters of the target and candidate are equal...
append(&anagrams, candidate)
}
}
return anagrams[:]
}
// Returns allocated slice of runes containing
// the letters of the word in sorted order.
letters_in_order :: proc(word: string) -> []rune {
letters := utf8.string_to_runes(word)
slice.sort(letters)
return letters
}
Here are some tests from this exercise:
package anagram
import "core:fmt"
import "core:testing"
// When #caller_location is used as a default parameter value,
// the compiler inserts the source location (file and line)
// of the call site. Effectively here, errors logged by
// the expect_value call will use the line number of where
// expect_slices_match itself was called (rather than where
// expect_value is called inside expect_slices_match).
expect_slices_match :: proc(t: ^testing.T, actual, expected: []string, loc := #caller_location) {
result := fmt.aprintf("%s", actual)
exp_str := fmt.aprintf("%s", expected)
defer {
delete(result)
delete(exp_str)
}
testing.expect_value(t, result, exp_str, loc = loc)
}
@(test)
/// description = no matches
test_no_matches :: proc(t: ^testing.T) {
result := find_anagrams("diaper",
[]string{"hello", "world", "zombies", "pants"})
defer delete(result)
// An error logged in this call will cite the
// line number of this call.
expect_slices_match(t, result, []string{})
}
// etc...
Exercise: Flatten Array
The flatten procedure returns the list of integers from an ordered hierarchy.
package flatten_array
// This union is recursively defined as having
// variants i32 and slices of itself.
// An Item effectively represents an ordered tree of i32 values.
Item :: union {
i32,
[]Item,
}
// Returns the flattened list of all i32s within an Item.
flatten :: proc(input: Item) -> []i32 {
result := make([dynamic]i32)
// Instead of calling flatten recursively, we use a stack
// to track the nested Items as we encounter them.
// (While a recursive solution might be a bit
// more familiar and simpler, creating our own stack
// instead is often more efficient.)
stack := make([dynamic]Item)
defer delete(stack)
// We start the loop with just the original item in the stack.
append(&stack, input)
for len(stack) > 0 {
// The builtin proc pop removes the last
// element of a dynamic array.
item := pop(&stack)
switch v in item {
case i32:
// Append the actual ints to the output array.
append(&result, v)
case []Item:
// Append the Items in reverse because
// the stack is consumed last-in-first-out.
#reverse for child in v {
append(&stack, child)
}
}
}
return result[:]
}
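To see why the children are pushed in reverse, here is an illustrative Python sketch of the same explicit-stack traversal (the `flatten` function and nested-list input here are stand-ins, not the Odin API):

```python
def flatten(item):
    """Flatten a nested list of ints using an explicit stack (LIFO)."""
    result = []
    stack = [item]
    while stack:
        node = stack.pop()  # removes the last element, like Odin's pop
        if isinstance(node, int):
            result.append(node)
        else:
            # Push children in reverse so the first child is popped first,
            # preserving left-to-right order.
            for child in reversed(node):
                stack.append(child)
    return result

print(flatten([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]
```

If the children were pushed in their natural order, each level would come out right-to-left instead.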
Exercise: Circular Buffer
The Ring_Buffer struct represents a ring buffer of ints.
package circular_buffer
import "base:runtime"
Ring_Buffer :: struct {
elements: []int,
// Number of slots that are currently occupied
size: int,
// Index of the first element
head: int,
// Generally, an allocated data structure should
// reference its allocator so it can be freed or reallocated.
allocator: runtime.Allocator,
}
Ring_Error :: enum {
None,
BufferEmpty,
BufferFull,
}
new_buffer :: proc(capacity: int,
// The allocator parameter has a default value
// and inferred type (runtime.Allocator)
allocator := context.allocator) -> Ring_Buffer {
return Ring_Buffer{
elements = make([]int, capacity, allocator),
allocator = allocator,
}
}
destroy_buffer :: proc(b: ^Ring_Buffer) {
delete(b.elements, b.allocator)
b.size = 0
b.head = 0
}
clear :: proc(b: ^Ring_Buffer) {
b.head = 0
b.size = 0
}
// Pop the head element of the buffer.
// Return .BufferEmpty if buffer is empty
read :: proc(b: ^Ring_Buffer) -> (int, Ring_Error) {
if b.size == 0 {
return 0, .BufferEmpty
}
value := b.elements[b.head]
// advance head (wrap if necessary)
b.head = (b.head + 1) % len(b.elements)
b.size -= 1
return value, .None
}
// Add an element to end of the buffer.
// Return .BufferFull if buffer is full
write :: proc(b: ^Ring_Buffer, value: int) -> Ring_Error {
if b.size == len(b.elements) {
return .BufferFull
}
index := (b.head + b.size) % len(b.elements)
b.elements[index] = value
b.size += 1
return .None
}
// Add an element to end of the buffer.
// If the buffer is full, overwrite the current head
// and advance the head to the next index.
overwrite :: proc(b: ^Ring_Buffer, value: int) {
err := write(b, value)
if err == .BufferFull {
// if buffer was full...
// overwrite oldest value and move head
b.elements[b.head] = value
// advance head (wrap if necessary)
b.head = (b.head + 1) % len(b.elements)
}
}
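The head/size bookkeeping above can be mirrored in Python to highlight the modular-index arithmetic (this class is purely illustrative, not part of the exercise):

```python
class RingBuffer:
    def __init__(self, capacity):
        self.elements = [0] * capacity
        self.size = 0   # number of occupied slots
        self.head = 0   # index of the oldest element

    def read(self):
        if self.size == 0:
            raise IndexError("buffer empty")
        value = self.elements[self.head]
        # advance head (wrap if necessary)
        self.head = (self.head + 1) % len(self.elements)
        self.size -= 1
        return value

    def write(self, value):
        if self.size == len(self.elements):
            raise IndexError("buffer full")
        # The next free slot is head + size, wrapped around.
        self.elements[(self.head + self.size) % len(self.elements)] = value
        self.size += 1

buf = RingBuffer(2)
buf.write(1)
buf.write(2)
print(buf.read(), buf.read())  # 1 2
```

The key invariant is that occupied slots always run from `head` through `head + size - 1` (modulo capacity), so neither reads nor writes ever need to shift elements.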
Exercise: Linked List
This example defines a generic doubly-linked list type.
package linked_list
import "base:runtime"
List :: struct ($T: typeid) {
head: ^Node(T),
tail: ^Node(T),
allocator: runtime.Allocator
}
Node :: struct ($T: typeid) {
prev: ^Node(T),
next: ^Node(T),
value: T,
}
Error :: enum {
None,
Empty_List,
}
// Create a new list, optionally with initial elements.
// The .. on the last parameter indicates this proc can be
// called with 0 or more T arguments. The arguments are passed
// to the last parameter in a slice of T.
new_list :: proc($T: typeid, allocator := context.allocator, elements: ..T) -> List(T) {
list := List(T){ allocator = allocator }
for element, index in elements {
node := new(Node(T), allocator)
node.value = element
node.prev = list.tail
if index == 0 {
list.head = node
} else {
list.tail.next = node
}
list.tail = node
}
return list
}
// Deallocate the list
destroy_list :: proc(l: ^List($T)) {
for node := l.head; node != nil; node = node.next {
free(node, l.allocator)
}
}
// Insert a value at the head of the list.
unshift :: proc(l: ^List($T), value: T) {
node := new(Node(T), l.allocator)
node.value = value
node.next = l.head
if l.head != nil {
l.head.prev = node
}
l.head = node
if l.tail == nil {
l.tail = node
}
}
// Add a value to the tail of the list
push :: proc(l: ^List($T), value: T) {
node := new(Node(T), l.allocator)
node.value = value
node.prev = l.tail
if l.tail != nil {
l.tail.next = node
}
l.tail = node
if l.head == nil {
l.head = node
}
}
// Remove and return the value at the head of the list.
shift :: proc(l: ^List($T)) -> (T, Error) {
if l.head == nil {
return {}, .Empty_List
}
shifted_node := l.head
if l.head == l.tail {
l.head = nil
l.tail = nil
} else {
l.head.next.prev = nil
l.head = l.head.next
}
defer free(shifted_node, l.allocator)
return shifted_node.value, .None
}
// Remove and return the value at the tail of the list.
pop :: proc(l: ^List($T)) -> (T, Error) {
if l.head == nil {
return {}, .Empty_List
}
popped_node := l.tail
if l.head == l.tail {
l.head = nil
l.tail = nil
} else {
l.tail.prev.next = nil
l.tail = l.tail.prev
}
defer free(popped_node, l.allocator)
return popped_node.value, .None
}
// Reverse the elements in the list (in-place).
reverse :: proc(l: ^List($T)) {
// Start from the tail and move up to the head,
// while swapping the nodes 'prev' and 'next' pointers.
next_node := l.tail
for next_node != nil {
n := next_node
// Advance to the previous node before we swap this
// node's pointers, so we don't lose our place in the loop.
next_node = next_node.prev
n.prev, n.next = n.next, n.prev
}
l.head, l.tail = l.tail, l.head
}
// Returns the number of elements in the list
count :: proc(l: List($T)) -> int {
n := 0
for node := l.head; node != nil; node = node.next {
n += 1
}
return n
}
// Remove the first element from the list which has the given value.
// List is unchanged if there is no matching value.
remove_first :: proc(l: ^List($T), value: T) {
for node := l.head; node != nil; node = node.next {
if node.value == value {
if node.prev != nil {
node.prev.next = node.next
} else {
l.head = node.next
}
if node.next != nil {
node.next.prev = node.prev
} else {
l.tail = node.prev
}
free(node, l.allocator)
return
}
}
}
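The pointer swap in reverse is easy to get wrong, so here is an illustrative Python mirror of the same tail-to-head walk (the `Node` class and `reverse` function are stand-ins, not the Odin API):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def reverse(head, tail):
    """Swap prev/next on every node, walking tail -> head."""
    node = tail
    while node is not None:
        nxt = node.prev  # remember where to go before swapping
        node.prev, node.next = node.next, node.prev
        node = nxt
    return tail, head    # the old tail is the new head

# build a <-> b <-> c
a, b, c = Node(1), Node(2), Node(3)
a.next, b.prev = b, a
b.next, c.prev = c, b
head, tail = reverse(a, c)
values = []
n = head
while n:
    values.append(n.value)
    n = n.next
print(values)  # [3, 2, 1]
```

As in the Odin version, the next node to visit must be saved before the swap, because after swapping, the node's `prev` field points the other way.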
Zig Intro - Code Examples (Ziglings)

This text is a supplement to a video that introduces the Zig programming language by walking through small code exercises from the Ziglings project.
This walkthrough assumes the audience has reasonable familiarity with C or other similar languages (e.g. C++, Rust, Odin, or Go). If you’re new to this kind of programming, it may help to first check out my Odin Introduction.
[!NOTE] Not all Ziglings exercises are included. Some exercises are skipped because they are redundant. Several others are skipped because they cover
async/await, a feature that is not yet available in the main Zig compiler.
The Ziglings exercises present broken code examples that need fixes to pass their tests, but here we present just completed solutions. Rather than focus on the particular problems being solved and the logic of their solutions, the video commentary and the code comments in this text focus just on the Zig language features introduced by each exercise.
[!WARNING] I strongly recommend working through the Ziglings exercises yourself at some point, say, one or two weeks after watching the video and reading this text.
003_assignment.zig
// A package namespace is a struct.
// This assigns the "std" package struct to 'std' in the
// current package struct.
const std = @import("std");
// Program entry point. Returns nothing.
pub fn main() void {
// local variable 'n' of type u8
var n: u8 = 50;
n = n + 5;
// local constant 'pi' of type u32
const pi: u32 = 314159;
// local constant 'negative_eleven' of type i8
const negative_eleven: i8 = -11;
// The 'std' package includes 'debug' package,
// and 'debug' package includes 'print' function.
// The .{} is an anonymous struct literal, here with three
// values assigned to indexes 0, 1, and 2 of the struct.
// Print's second parameter has type 'anytype'.
// Print uses introspection to access the indexes of the struct.
std.debug.print("{} {} {}\n", .{ n, pi, negative_eleven });
}
005_arrays2.zig
const std = @import("std");
// create alias in local package for member of imported package
const assert = std.debug.assert;
pub fn main() void {
// array of u8s
// The underscore indicates the array size is
// inferred from the number of elements.
const le = [_]u8{ 1, 3 };
const et = [_]u8{ 3, 7 };
// 1 3 3 7
// ++ concatenates the two [2]u8 arrays into a [4]u8 array
const leet = le ++ et;
assert(leet.len == 4);
// 1 0 0 1 1 0 0 1 1 0 0 1
// ** concatenates 3 instances of the array together
const bit_pattern = [_]u8{ 1, 0, 0, 1 } ** 3;
assert(bit_pattern.len == 12);
std.debug.print("LEET: ", .{});
// loop for each element of leet, assigning each element to n
for (leet) |n| {
std.debug.print("{}", .{n});
}
std.debug.print(", Bits: ", .{});
for (bit_pattern) |n| {
std.debug.print("{}", .{n});
}
std.debug.print("\n", .{});
}
006_strings.zig
const std = @import("std");
pub fn main() void {
const ziggy = "stardust";
const d: u8 = ziggy[4]; // the u8 at index 4 of the string
// concatenate 3 instances of the string together
const laugh = "ha " ** 3;
const major = "Major";
const tom = "Tom";
// concatenate strings 'major', " ", and 'tom'
const major_tom = major ++ " " ++ tom;
// {u} means print the integer as a UTF-8 code point (a character)
// {s} means print as string
std.debug.print("d={u} {s}{s}\n", .{ d, laugh, major_tom });
}
007_strings2.zig
const std = @import("std");
pub fn main() void {
// Multi-line string literals begin with \\ and run to end of line.
// Successive lines starting with \\ continue the string.
const lyrics =
\\Ziggy played guitar
\\Jamming good with Andrew Kelley
\\And the Spiders from Mars
;
std.debug.print("{s}\n", .{lyrics});
}
010_if2.zig
const std = @import("std");
pub fn main() void {
const discount = true;
// if-else used as an expression:
// if discount is true, evaluates to 17
// if discount is false, evaluates to 20
const price: u8 = if (discount) 17 else 20;
std.debug.print("With the discount, the price is ${}.\n", .{price});
}
012_while2.zig
const std = @import("std");
pub fn main() void {
var n: u32 = 2;
// The expression after : is evaluated after each iteration.
while (n < 1000) : (n *= 2) {
std.debug.print("{} ", .{n});
}
std.debug.print("n={}\n", .{n});
}
016_for2.zig
const std = @import("std");
pub fn main() void {
const bits = [_]u8{ 1, 0, 1, 1 };
var value: u32 = 0;
// A for loop can iterate over multiple collections or ranges in tandem.
// The lengths must match. (If the lengths are not knowable
// at compile time, the lengths are checked at runtime before
// the first iteration and trigger a panic if unequal.)
// Here, array 'bits' and range 0.. are iterated in tandem.
// (The upper bound of this range is unspecified, so it
// automatically matches the array length.)
for (bits, 0..) |bit, i| {
// convert the usize i to a u32 with builtin @intCast()
const i_u32: u32 = @intCast(i);
const place_value = std.math.pow(u32, 2, i_u32);
value += place_value * bit;
}
std.debug.print("The value of bits '1101': {}.\n", .{value});
}
021_errors.zig
// Defines MyNumberError to be an "error set" type.
// An error set is like an enum, but the members are
// given unique global ids.
const MyNumberError = error{
TooBig,
TooSmall,
TooFour,
};
const std = @import("std");
pub fn main() void {
const nums = [_]u8{ 2, 3, 4, 5, 6 };
for (nums) |n| {
std.debug.print("{}", .{n});
const number_error = numberFail(n);
if (number_error == MyNumberError.TooBig) {
std.debug.print(">4. ", .{});
}
if (number_error == MyNumberError.TooSmall) {
std.debug.print("<4. ", .{});
}
if (number_error == MyNumberError.TooFour) {
std.debug.print("=4. ", .{});
}
}
std.debug.print("\n", .{});
}
// returns a MyNumberError value
fn numberFail(n: u8) MyNumberError {
if (n > 4) return MyNumberError.TooBig;
if (n < 4) return MyNumberError.TooSmall;
return MyNumberError.TooFour;
}
022_errors2.zig
const std = @import("std");
const MyNumberError = error{
TooSmall
};
pub fn main() void {
// an "error union" type is two types joined by an !:
// SomeErrorSet ! SomePayloadType
// ...where the "payload" can be any type (including another error set).
// The variable's type here is an error
// union joining MyNumberError and u8,
// so this variable can be assigned any
// MyNumberError value or any u8 value.
var my_number: MyNumberError!u8 = 5;
my_number = MyNumberError.TooSmall;
std.debug.print("I compiled!\n", .{});
}
023_errors3.zig
const std = @import("std");
const MyNumberError = error{
TooSmall
};
pub fn main() void {
// If the call returns an error, the catch expression is evaluated
// and returned instead of the value returned by the call itself.
// Generally, a catch clause acts as a default value in case of an error.
const a: u32 = addTwenty(44) catch 22; // 'a' assigned 64
const b: u32 = addTwenty(4) catch 22; // 'b' assigned 22
std.debug.print("a={}, b={}\n", .{ a, b });
}
// Returns either a MyNumberError or a u32
fn addTwenty(n: u32) MyNumberError!u32 {
if (n < 5) {
return MyNumberError.TooSmall;
} else {
return n + 20;
}
}
024_errors4.zig
const std = @import("std");
const MyNumberError = error{
TooSmall,
TooBig,
};
pub fn main() void {
const a: u32 = makeJustRight(44) catch 0;
const b: u32 = makeJustRight(14) catch 0;
const c: u32 = makeJustRight(4) catch 0;
std.debug.print("a={}, b={}, c={}\n", .{ a, b, c });
}
// (You can ignore the convoluted logic of this
// example. Just focus on the catch syntax.)
fn makeJustRight(n: u32) MyNumberError!u32 {
// If the fixTooBig call returns an error, catch clause is evaluated.
// The catch clause assigns the error to 'err' and executes
// its block (the curly braces after |err|).
return fixTooBig(n) catch |err| {
return err; // return from the containing function
};
}
fn fixTooBig(n: u32) MyNumberError!u32 {
return fixTooSmall(n) catch |err| {
if (err == MyNumberError.TooBig) {
return 20;
}
return err;
};
}
fn fixTooSmall(n: u32) MyNumberError!u32 {
return actualFix(n) catch |err| {
if (err == MyNumberError.TooSmall) {
return 10;
}
return err;
};
}
fn actualFix(n: u32) MyNumberError!u32 {
if (n < 10) return MyNumberError.TooSmall;
if (n > 20) return MyNumberError.TooBig;
return n;
}
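This layered catch-and-filter chain maps onto ordinary exception handling. Here is an illustrative Python analogue (the exception classes and function names are stand-ins for the Zig error set and procs):

```python
class TooSmall(Exception): pass
class TooBig(Exception): pass

def actual_fix(n):
    if n < 10: raise TooSmall
    if n > 20: raise TooBig
    return n

def fix_too_small(n):
    try:
        return actual_fix(n)
    except TooSmall:   # handle only this error; others propagate
        return 10

def fix_too_big(n):
    try:
        return fix_too_small(n)
    except TooBig:     # handle only this error; others propagate
        return 20

print(fix_too_big(44), fix_too_big(14), fix_too_big(4))  # 20 14 10
```

The Zig version expresses the "handle only this error, rethrow the rest" pattern with an explicit `if (err == ...)` inside the catch block, since a catch clause receives every error from the call.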
025_errors5.zig
const std = @import("std");
const MyNumberError = error{
TooSmall,
TooBig,
};
pub fn main() void {
const a: u32 = addFive(44) catch 0;
const b: u32 = addFive(14) catch 0;
const c: u32 = addFive(4) catch 0;
std.debug.print("a={}, b={}, c={}\n", .{ a, b, c });
}
fn addFive(n: u32) MyNumberError!u32 {
// The 'try' is shorthand for:
// detect(n) catch |err| return err;
const x = try detect(n);
return x + 5;
}
fn detect(n: u32) MyNumberError!u32 {
if (n < 10) return MyNumberError.TooSmall;
if (n > 20) return MyNumberError.TooBig;
return n;
}
026_hello2.zig
const std = @import("std");
pub fn main(init: std.process.Init) !void {
// std.debug.print writes to standard error, not standard output!
const io = init.io;
// Get the standard output file.
var stdout_file = std.Io.File.stdout();
// Create a writer for standard output
var stdout_writer = stdout_file.writer(io, &.{});
const stdout = &stdout_writer.interface;
// Writing to standard output can fail with an error,
// so we use 'try':
try stdout.print("Hello world!\n", .{});
}
027_defer.zig
const std = @import("std");
pub fn main() void {
// defer the print call to end of the scope
// (in this case end of the function)
defer std.debug.print("Two\n", .{});
// Should print before the above.
std.debug.print("One ", .{});
}
029_errdefer.zig
const std = @import("std");
var counter: u32 = 0;
const MyErr = error{ GetFail, IncFail };
pub fn main() void {
// return if we fail to get a number
const a: u32 = makeNumber() catch return;
const b: u32 = makeNumber() catch return;
std.debug.print("Numbers: {}, {}\n", .{ a, b });
}
fn makeNumber() MyErr!u32 {
std.debug.print("Getting number...", .{});
// registers deferred print call,
// but only executes if function returns an error
errdefer std.debug.print("failed!\n", .{});
// These try calls may trigger returning an error.
var num = try getNumber();
num = try increaseNumber(num);
std.debug.print("got {}. ", .{num});
return num;
}
fn getNumber() MyErr!u32 {
return 4;
}
fn increaseNumber(n: u32) MyErr!u32 {
if (counter > 0) return MyErr.IncFail;
counter += 1;
return n + 1;
}
030_switch.zig
const std = @import("std");
pub fn main() void {
const lang_chars = [_]u8{ 26, 9, 7, 42 };
for (lang_chars) |c| {
// switch on value of 'c'
switch (c) {
1 => std.debug.print("A", .{}), // if 1
2 => std.debug.print("B", .{}), // if 2
3 => std.debug.print("C", .{}), // etc...
4 => std.debug.print("D", .{}),
5 => std.debug.print("E", .{}),
6 => std.debug.print("F", .{}),
7 => std.debug.print("G", .{}),
8 => std.debug.print("H", .{}),
9 => std.debug.print("I", .{}),
10 => std.debug.print("J", .{}),
// ... skip some letters
25 => std.debug.print("Y", .{}),
26 => std.debug.print("Z", .{}),
else => { // default case
std.debug.print("?", .{});
},
}
}
std.debug.print("\n", .{});
}
031_switch2.zig
const std = @import("std");
pub fn main() void {
const lang_chars = [_]u8{ 26, 9, 7, 42 };
for (lang_chars) |c| {
// switch as an expression:
const real_char: u8 = switch (c) {
1 => 'A', // evaluate to 'A'
2 => 'B', // evaluate to 'B'
3 => 'C', // etc...
4 => 'D',
5 => 'E',
6 => 'F',
7 => 'G',
8 => 'H',
9 => 'I',
10 => 'J',
// ...
25 => 'Y',
26 => 'Z',
else => '!',
};
std.debug.print("{c}", .{real_char});
}
std.debug.print("\n", .{});
}
032_unreachable.zig
const std = @import("std");
pub fn main() void {
const operations = [_]u8{ 1, 1, 1, 3, 2, 2 };
var current_value: u32 = 0;
for (operations) |op| {
switch (op) {
1 => {
current_value += 1;
},
2 => {
current_value -= 1;
},
3 => {
current_value *= current_value;
},
else => unreachable, // triggers a panic!
// (useful in development to crash and emit stack trace
// if an unexpected code path executes)
}
std.debug.print("{} ", .{current_value});
}
std.debug.print("\n", .{});
}
033_iferror.zig
const MyNumberError = error{
TooBig,
TooSmall,
};
const std = @import("std");
pub fn main() void {
const nums = [_]u8{ 2, 3, 4, 5, 6 };
for (nums) |num| {
std.debug.print("{}", .{num});
const n: MyNumberError!u8 = numberMaybeFail(num);
// Branches on error union value:
if (n) |value| {
// if n is the payload type (u8 in this case)
std.debug.print("={}. ", .{value});
} else |err| switch (err) {
// if n is the error set type (MyNumberError in this case)
MyNumberError.TooBig => std.debug.print(">4. ", .{}),
MyNumberError.TooSmall => std.debug.print("<4. ", .{}),
}
}
std.debug.print("\n", .{});
}
fn numberMaybeFail(n: u8) MyNumberError!u8 {
if (n > 4) return MyNumberError.TooBig;
if (n < 4) return MyNumberError.TooSmall;
return n;
}
035_enums.zig
const std = @import("std");
// Define type Ops as an enum with three values
// (Unlike in Odin, values of an enum with no specified integer type
// are not implicitly integers. Rather, they are just distinct names.)
const Ops = enum {
inc,
pow,
dec
};
pub fn main() void {
const operations = [_]Ops{
Ops.inc,
Ops.inc,
Ops.inc,
Ops.pow,
Ops.dec,
Ops.dec,
};
var current_value: u32 = 0;
for (operations) |op| {
// Switch on enum value
switch (op) {
Ops.inc => {
current_value += 1;
},
Ops.dec => {
current_value -= 1;
},
Ops.pow => {
current_value *= current_value;
},
// No "else" because already exhaustive
}
std.debug.print("{} ", .{current_value});
}
std.debug.print("\n", .{});
}
036_enums2.zig
const std = @import("std");
// Define enum type Color with three values.
// u32 is the backing "tag" type
const Color = enum(u32) {
red = 0xff0000,
green = 0x00ff00,
blue = 0x0000ff,
};
pub fn main() void {
// {x:0>6}
// ^
// x type ('x' is lower-case hexadecimal)
// : separator (needed for format syntax)
// 0 padding character (default is ' ')
// > alignment ('>' aligns right)
// 6 width (use padding to force width)
std.debug.print(
\\<p>
\\ <span style="color: #{x:0>6}">Red</span>
\\ <span style="color: #{x:0>6}">Green</span>
\\ <span style="color: #{x:0>6}">Blue</span>
\\</p>
\\
, .{
@intFromEnum(Color.red), // convert from Color to u32
@intFromEnum(Color.green),
@intFromEnum(Color.blue),
});
}
037_structs.zig
const std = @import("std");
const Role = enum {
wizard,
thief,
bard,
warrior,
};
// Define a struct Character with four fields
const Character = struct {
role: Role, // field 'role' of type Role
gold: u32, // field 'gold' of type u32
experience: u32, // etc...
health: u8,
};
pub fn main() void {
// A Character struct literal with specified values for all four fields
var glorp_the_wise = Character{
.role = Role.wizard,
.gold = 20,
.experience = 10,
.health = 100,
};
glorp_the_wise.gold += 5;
glorp_the_wise.health -= 10;
std.debug.print("Your wizard has {} health and {} gold.\n", .{
glorp_the_wise.health,
glorp_the_wise.gold,
});
}
039_pointers.zig
const std = @import("std");
pub fn main() void {
var num1: u8 = 5;
// *u8 = pointer to u8
// The & operator here gets a *u8 from the u8 variable.
const num1_pointer: *u8 = &num1;
var num2: u8 = 6;
num2 = num1_pointer.*; // dereference the pointer
// num1_pointer is const, but the pointer itself is not,
// so we can assign to its dereference
num1_pointer.* = 9;
std.debug.print("num1: {}, num2: {}\n", .{ num1, num2 });
}
040_pointers2.zig
const std = @import("std");
pub fn main() void {
var a: u8 = 0;
// 'b' is a const, so cannot assign a different pointer value to 'b',
// and the type is 'pointer-to-const-u8'
// (&a returns a pointer-to-u8, which is
// implicitly cast to pointer-to-const-u8)
const b: *const u8 = &a;
// cannot assign to deref of a pointer-to-const
// b.* = 7;
// Note that a const pointer does NOT guarantee that
// the referenced data is immutable!
a = 12;
// OK to deref pointer-to-const to read value
std.debug.print("a: {}, b: {}\n", .{ a, b.* });
}
045_optionals.zig
const std = @import("std");
pub fn main() void {
const result = deepThought();
// 'orelse' evaluates and returns its
// right operand if left operand is null
const answer: u8 = result orelse 42;
std.debug.print("The Ultimate Answer: {}.\n", .{answer});
}
// Returns either a u8 or null
// (?u8 is an optional; any type can be made optional, not just pointers)
fn deepThought() ?u8 {
return null;
}
046_optionals2_.zig
const std = @import("std");
const Elephant = struct {
letter: u8,
// 'tail' is a pointer-to-Elephant or null
// (a non-? pointer cannot be null)
tail: ?*Elephant = null,
visited: bool = false,
};
pub fn main() void {
var elephantA = Elephant{ .letter = 'A' };
var elephantB = Elephant{ .letter = 'B' };
var elephantC = Elephant{ .letter = 'C' };
linkElephants(&elephantA, &elephantB);
linkElephants(&elephantB, &elephantC);
visitElephants(&elephantA);
std.debug.print("\n", .{});
}
fn linkElephants(e1: ?*Elephant, e2: ?*Elephant) void {
// panic if e1 or e2 is null, otherwise assign e2 to e1.tail
(e1 orelse unreachable).tail = e2 orelse unreachable;
// shorthand for prior line
e1.?.tail = e2.?;
}
fn visitElephants(first_elephant: *Elephant) void {
var e = first_elephant;
while (!e.visited) {
std.debug.print("Elephant {u}. ", .{e.letter});
e.visited = true;
// break if .tail is null
e = e.tail orelse break;
}
}
047_methods.zig
const std = @import("std");
const Alien = struct {
health: u8,
// .hatch belongs to namespace of Alien
pub fn hatch(strength: u8) Alien {
return Alien{
.health = strength * 5,
};
}
};
const HeatRay = struct {
damage: u8,
// .zap belongs to namespace of HeatRay
pub fn zap(self: HeatRay, alien: *Alien) void {
alien.health -= if (self.damage >= alien.health)
alien.health else self.damage;
}
};
pub fn main() void {
var aliens = [_]Alien{
// invoke .hatch like a Java static method
Alien.hatch(2),
Alien.hatch(1),
Alien.hatch(3),
Alien.hatch(3),
Alien.hatch(5),
Alien.hatch(3),
};
var n_aliens_alive = aliens.len;
const heat_ray = HeatRay{ .damage = 7 };
while (n_aliens_alive > 0) {
n_aliens_alive = 0;
// loop through every alien by pointer
// (both & and * required)
for (&aliens) |*alien| {
HeatRay.zap(heat_ray, alien);
// heat_ray.zap(alien); // shorthand for prior line
if (alien.health > 0) {
n_aliens_alive += 1;
}
}
std.debug.print("{} aliens. ", .{n_aliens_alive});
}
std.debug.print("Earth is saved!\n", .{});
}
050_no_value.zig
const std = @import("std");
const Err = error{
Cthulhu
};
pub fn main() void {
// A pointer-to-const-[16]u8
// Starts undefined (i.e. explicitly uninitialized)
var first_line1: *const [16]u8 = undefined;
// String literal of 16 characters coerced to *const [16]u8
first_line1 = "That is not dead";
// An error union of Err and pointer-to-const-[21]u8
var first_line2: Err!*const [21]u8 = Err.Cthulhu;
// String literal of 21 characters coerced to *const [21]u8
first_line2 = "which can eternal lie";
// Need "{!s}" format for the error union string.
std.debug.print("{s} {!s} / ", .{ first_line1, first_line2 });
printSecondLine();
}
fn printSecondLine() void {
// Nullable-pointer-to-const-[18]u8
var second_line2: ?*const [18]u8 = null;
// String literal of 18 characters coerced to *const [18]u8
second_line2 = "even death may die";
std.debug.print("And with strange aeons {s}.\n", .{second_line2.?});
}
051_values.zig
const std = @import("std");
const Character = struct {
gold: u32 = 0,
health: u8 = 100,
experience: u32 = 0,
};
// global Character constant
const the_narrator = Character{
.gold = 12,
.health = 99,
.experience = 9000,
};
// global Character variable
var global_wizard = Character{};
pub fn main() void {
var glorp = Character{
.gold = 30,
};
const reward_xp: u32 = 200;
// local alias of imported function
const print = std.debug.print;
var glorp_access1: Character = glorp;
glorp_access1.gold = 111;
print("1:{}!. ", .{glorp.gold == glorp_access1.gold});
var glorp_access2: *Character = &glorp;
glorp_access2.gold = 222;
print("2:{}!. ", .{glorp.gold == glorp_access2.gold});
const glorp_access3: *Character = &glorp;
glorp_access3.gold = 333;
print("3:{}!. ", .{glorp.gold == glorp_access3.gold});
print("XP before:{}, ", .{glorp.experience});
levelUp(&glorp, reward_xp);
print("after:{}.\n", .{glorp.experience});
}
fn levelUp(character: *Character, xp: u32) void {
character.experience += xp;
}
052_slices.zig
const std = @import("std");
pub fn main() void {
var cards = [8]u8{ 'A', '4', 'K', '8', '5', '2', 'Q', 'J' };
// constants 'hand1' and 'hand2' are slices of u8
// [0..4] gets slice of array from index 0 up to (but not including) 4
const hand1: []u8 = cards[0..4];
// [4..] gets slice of array from index 4 up through end of the array
const hand2: []u8 = cards[4..];
std.debug.print("Hand1: ", .{});
printHand(hand1);
std.debug.print("Hand2: ", .{});
printHand(hand2);
}
fn printHand(hand: []u8) void {
for (hand) |h| {
std.debug.print("{u} ", .{h});
}
std.debug.print("\n", .{});
}
053_slices2.zig
const std = @import("std");
pub fn main() void {
const scrambled = "great base for all your justice are belong to us";
// these are slices of const u8
// (slicing a string returns a slice of constants)
const base1: []const u8 = scrambled[15..23];
const base2: []const u8 = scrambled[6..10];
const base3: []const u8 = scrambled[32..];
printPhrase(base1, base2, base3);
const justice1: []const u8 = scrambled[11..14];
const justice2: []const u8 = scrambled[0..5];
const justice3: []const u8 = scrambled[24..31];
printPhrase(justice1, justice2, justice3);
std.debug.print("\n", .{});
}
fn printPhrase(part1: []const u8, part2: []const u8, part3: []const u8) void {
std.debug.print("'{s} {s} {s}.' ", .{ part1, part2, part3 });
}
054_manypointers.zig
const std = @import("std");
pub fn main() void {
const s = "ABCDEFG";
// Coerce to a pointer-to-const-array
// (s.len is a compile time expression, so valid for the array size)
const ptr: *const [s.len]u8 = s;
// Coerce to a slice
var slice: []const u8 = s;
// A "many"-pointer-to-const-u8
const manyptr: [*]const u8 = ptr;
// Index the many-item pointer like an array
const char: u8 = manyptr[5]; // 'F'
// Get slice from a many pointer
// Range is from 0 up to (but not including) ptr.len
slice = manyptr[0..ptr.len];
std.debug.print("{s} {c}\n", .{ slice, char });
}
055_unions.zig
const std = @import("std");
// A Zig union is like a struct where the fields overlap in memory,
// so assigning to .ant clobbers .bee and vice versa
const Insect = union {
ant: Ant,
bee: Bee,
};
const Ant = struct {
still_alive: bool,
};
const Bee = struct {
flowers_visited: u16,
};
// Unlike Odin unions, a Zig union by default is not tagged,
// so we separately create this enum to discriminate the union
const Species = enum {
ant,
bee,
};
pub fn main() void {
const ant = Ant{ .still_alive = true };
const bee = Bee{ .flowers_visited = 15 };
// A union literal looks basically like a struct literal
var insect = Insect{ .ant = ant };
printInsect(insect, Species.ant);
insect = Insect{ .bee = bee };
printInsect(insect, Species.bee);
}
fn printInsect(insect: Insect, species: Species) void {
// switch on the enum value
switch (species) {
.ant => std.debug.print("Ant alive is: {}. \n",
.{insect.ant.still_alive}),
.bee => std.debug.print("Bee visited {} flowers. \n",
.{insect.bee.flowers_visited}),
}
}
056_unions2.zig
const std = @import("std");
// This Insect union is tagged with the Species enum
const Insect = union(Species) {
ant: Ant,
bee: Bee,
};
const Ant = struct {
still_alive: bool,
};
const Bee = struct {
flowers_visited: u16,
};
const Species = enum {
ant,
bee,
};
pub fn main() void {
const ant = Ant{ .still_alive = true };
const bee = Bee{ .flowers_visited = 16 };
// Insect with .ant value has the ant Species tag
var insect = Insect{ .ant = ant };
printInsect(insect);
// Insect with .bee value has the bee Species tag
insect = Insect{ .bee = bee };
printInsect(insect);
}
fn printInsect(insect: Insect) void {
// switch on enum tag of the Insect union value
switch (insect) {
.ant => |a| std.debug.print("Ant alive is: {}. \n", .{a}),
.bee => |b| std.debug.print("Bee visited {} flowers. \n", .{b}),
}
}
057_unions3.zig
const std = @import("std");
// union of 'enum' means a tag enum is implicitly defined
const Insect = union(enum) {
ant: Ant,
bee: Bee,
};
const Ant = struct {
still_alive: bool,
};
const Bee = struct {
flowers_visited: u16,
};
pub fn main() void {
const ant = Ant{ .still_alive = true };
const bee = Bee{ .flowers_visited = 16 };
var insect = Insect{ .ant = ant };
printInsect(insect);
insect = Insect{ .bee = bee };
printInsect(insect);
}
fn printInsect(insect: Insect) void {
switch (insect) {
// the tag names are same as the union fields
.ant => |a| std.debug.print("Ant alive is: {}. \n", .{a}),
.bee => |b| std.debug.print("Bee visited {} flowers. \n", .{b}),
}
}
060_floats.zig
const print = @import("std").debug.print;
pub fn main() void {
// e in number literal indicates scientific notation
const shuttle_weight: f32 = 0.453592 * 4480e3;
// d = decimal
// .0 = precision
print("Shuttle liftoff weight: {d:.0} metric tons\n",
.{shuttle_weight / 1e3});
}
061_coercions.zig
// 1. Types can always be made _more_ restrictive.
//
// var foo: u8 = 5;
// var p1: *u8 = &foo;
// var p2: *const u8 = p1; // mutable to immutable
//
// 2. Numeric types can coerce to _larger_ types.
//
// var n1: u8 = 5;
// var n2: u16 = n1; // integer "widening"
//
// var n3: f16 = 42.0;
// var n4: f32 = n3; // float "widening"
//
// 3. Single-item pointers to arrays coerce to slices and
// many-item pointers.
//
// const arr: [3]u8 = [3]u8{5, 6, 7};
// const s: []const u8 = &arr; // to slice
// const p: [*]const u8 = &arr; // to many-item pointer
//
// 4. Single-item mutable pointers can coerce to single-item
// pointers pointing to an array of length 1.
//
// 5. Payload types and null coerce to optionals (the ? types).
//
// 6. Payload types and errors coerce to error unions.
//
// const MyError = error{Argh};
// var char: u8 = 'x';
// var char_or_die: MyError!u8 = char; // payload type
// char_or_die = MyError.Argh; // error
//
// 7. 'undefined' coerces to any type (or it wouldn't work!)
//
// 8. Compile-time numbers coerce to compatible types.
//
// Just about every single exercise program has had an example
// of this, but a full and proper explanation is coming your
// way soon in the third-eye-opening subject of comptime.
//
// 9. Tagged unions coerce to the current tagged enum.
//
// 10. Enums coerce to a tagged union when that tagged field is a
// zero-length type that has only one value (like void).
//
// 11. Zero-bit types (like void) can be coerced into single-item
// pointers.
//
const print = @import("std").debug.print;
pub fn main() void {
var letter: u8 = 'A';
// rules 4 and 5 apply here
const my_letter: ?*[1]u8 = &letter;
print("Letter: {u}\n", .{my_letter.?.*[0]});
}
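Rules 9 and 10 above have no inline example in the exercise, so here is a minimal sketch (the names `Tag` and `Shape` are made up for illustration): an enum value coerces to a tagged union when the matching field is a zero-bit type, and a tagged union coerces back to its tag enum.

```zig
const print = @import("std").debug.print;

const Tag = enum { none, circle };

const Shape = union(Tag) {
    none: void, // zero-bit payload, so rule 10 applies
    circle: f32,
};

pub fn main() void {
    const e: Tag = .none;
    // rule 10: the enum coerces to the tagged union
    // (only possible because 'none' holds void)
    const s: Shape = e;
    // rule 9: the tagged union coerces to its tag enum
    const t: Tag = s;
    print("tag: {s}\n", .{@tagName(t)});
}
```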
062_loop_expressions.zig
const print = @import("std").debug.print;
pub fn main() void {
const langs: [6][]const u8 = .{
"Erlang",
"Algol",
"C",
"OCaml",
"Zig",
"Prolog",
};
// for loop used as an expression:
// evaluates to the value given to break
const current_lang: ?[]const u8 = for (langs) |lang| {
if (lang.len == 3) {
break lang; // returns lang for this iteration
}
} else null; // evaluates to null if the loop ends without a break
if (current_lang) |cl| {
print("Current language: {s}\n", .{cl});
} else {
print("Did not find a three-letter language name. :-(\n", .{});
}
}
063_labels.zig
const print = @import("std").debug.print;
const ingredients = 4;
const foods = 4;
const Food = struct {
name: []const u8,
requires: [ingredients]bool,
};
// Chili Macaroni Tomato Sauce Cheese
// ------------------------------------------------------
// Mac & Cheese x x
// Chili Mac x x
// Pasta x x
// Cheesy Chili x x
// ------------------------------------------------------
const menu: [foods]Food = [_]Food{
Food{
.name = "Mac & Cheese",
.requires = [ingredients]bool{ false, true, false, true },
},
Food{
.name = "Chili Mac",
.requires = [ingredients]bool{ true, true, false, false },
},
Food{
.name = "Pasta",
.requires = [ingredients]bool{ false, true, true, false },
},
Food{
.name = "Cheesy Chili",
.requires = [ingredients]bool{ true, false, false, true },
},
};
pub fn main() void {
const wanted_ingredients = [_]u8{ 0, 3 }; // Chili, Cheese
// outer loop has label :food_loop
const meal = food_loop: for (menu) |food| {
for (food.requires, 0..) |required, required_ingredient| {
if (!required) {
continue; // continue innermost loop
}
const found = for (wanted_ingredients) |want_it| {
if (required_ingredient == want_it) {
// return true for iteration of innermost loop
break true;
}
} else false;
if (!found) {
// continue outer loop
continue :food_loop;
}
}
// return food for iteration of outer loop
break food;
} else undefined;
// a loop expression must have an else, but this
// should be unreachable, so we just use undefined
print("Enjoy your {s}!\n", .{meal.name});
}
064_builtins.zig
const print = @import("std").debug.print;
pub fn main() void {
// @addWithOverflow(a: anytype, b: anytype)
// struct { @TypeOf(a, b), u1 }
//
// - 'a' and 'b' are numbers of anytype.
// - The return value is a tuple with the result
// and a possible overflow bit.
//
const a: u4 = 0b1101;
const b: u4 = 0b0101;
const my_result = @addWithOverflow(a, b);
// Check out our fancy formatting! b:0>4 means, "print
// as a binary number, zero-pad right-aligned four digits."
// The print() below will produce: "1101 + 0101 = 0010 (true)".
print("{b:0>4} + {b:0>4} = {b:0>4} ({s})\n", .{ a, b, my_result[0], if (my_result[1] == 1) "true" else "false" });
const expected_result: u8 = 0b10010;
print("Without overflow: {b:0>8}.\n", .{expected_result});
// @bitReverse(integer: anytype) T
//
// * 'integer' is the value to reverse.
// * The return value will be the same type with the
// value's bits reversed
//
const input: u8 = 0b11110000;
const tupni: u8 = @bitReverse(input);
print("Furthermore, {b:0>8} backwards is {b:0>8}.\n", .{ input, tupni });
}
065_builtins2.zig
const print = @import("std").debug.print;
const Narcissus = struct {
// Default values
// (fields with default values can be
// omitted in literals of the struct)
me: *Narcissus = undefined,
myself: *Narcissus = undefined,
echo: void = undefined,
fn fetchTheMostBeautifulType() type {
// Returns the type of the containing struct
// (in this case Narcissus)
//
// Note: by convention, the name of a builtin that
// returns a type starts with an uppercase letter
return @This();
}
};
fn typeToString(myType: type) []const u8 {
// Import assigned to a local constant
const indexOf = @import("std").mem.indexOf;
// Gets type name as string
const name = @typeName(myType);
// Turn "065_builtins2.Narcissus" into "Narcissus"
return name[indexOf(u8, name, ".").? + 1 ..];
}
pub fn main() void {
var narcissus: Narcissus = Narcissus{};
narcissus.me = &narcissus;
narcissus.myself = &narcissus;
// Get type
const Type1: type = @TypeOf(narcissus, narcissus.me.*,
narcissus.myself.*);
const Type2: type = Narcissus.fetchTheMostBeautifulType();
print("A {s} loves all {s}es. \n", .{
typeToString(Type1),
typeToString(Type2),
});
// @"foo" is required syntax for identifiers that match a reserved word
const fields = @typeInfo(Narcissus).@"struct".fields;
// 'fields' is a slice of StructField, which is defined as:
//
// pub const StructField = struct {
// name: [:0]const u8,
// type: type,
// default_value_ptr: ?*const anyopaque,
// is_comptime: bool,
// alignment: comptime_int,
//
// defaultValue() ?sf.type // Function that loads the
// // field's default value from
// // `default_value_ptr`
// };
//
const field0 = if (fields[0].type != void) fields[0].name else " ";
const field1 = if (fields[1].type != void) fields[1].name else " ";
const field2 = if (fields[2].type != void) fields[2].name else " ";
print("He has room in his heart for: {s} {s} {s}",
.{ field0, field1, field2 });
print(".\n", .{});
}
066_comptime.zig
const print = @import("std").debug.print;
pub fn main() void {
// Unique types exist for constant number literals
// (the types could be left implicit here)
const const_int: comptime_int = 12345;
const const_float: comptime_float = 987.654;
print("Immutable: {}, {d:.3}; ", .{ const_int, const_float });
// Literals coerced from comptime_int / _float
var var_int: u32 = 12345;
var var_float: f32 = 987.654;
var_int = 54321;
var_float = 456.789;
print("Mutable: {}, {d:.3}; ", .{ var_int, var_float });
print("Types: {}, {}, {}, {}\n", .{
@TypeOf(const_int),
@TypeOf(const_float),
@TypeOf(var_int),
@TypeOf(var_float),
});
}
067_comptime2.zig
const print = @import("std").debug.print;
// When the compiler processes a statement, it asks two questions:
//
// 1. Should I run this now? (Is it comptime?)
// 2. Should I emit generated code for this? (Is it runtime?)
//
// For some statements it does both.
pub fn main() void {
// This variable is only *variable* at comptime,
// however its current value can be baked into
// runtime code as a constant.
comptime var count = 0;
count += 1; // constant count is now 1
const a1: [count]u8 = .{'A'} ** count;
count += 1; // constant count is now 2
const a2: [count]u8 = .{'B'} ** count;
count += 1; // constant count is now 3
const a3: [count]u8 = .{'C'} ** count;
count += 1; // constant count is now 4
const a4: [count]u8 = .{'D'} ** count;
print("{s} {s} {s} {s}\n", .{ a1, a2, a3, a4 });
}
068_comptime3.zig
const print = @import("std").debug.print;
const Schooner = struct {
name: []const u8,
scale: u32 = 1,
hull_length: u32 = 143,
bowsprit_length: u32 = 34,
mainmast_height: u32 = 95,
// Parameter 'scale' requires a compile time value argument
// The function is compiled once for each unique
// value passed to 'scale'
fn scaleMe(self: *Schooner, comptime scale: u32) void {
const my_scale = if (scale == 0) 1 else scale;
self.scale = my_scale;
self.hull_length /= my_scale;
self.bowsprit_length /= my_scale;
self.mainmast_height /= my_scale;
}
fn printMe(self: Schooner) void {
print("{s} (1:{}, {} x {})\n", .{
self.name,
self.scale,
self.hull_length,
self.mainmast_height,
});
}
};
pub fn main() void {
var whale = Schooner{ .name = "Whale" };
var shark = Schooner{ .name = "Shark" };
var minnow = Schooner{ .name = "Minnow" };
// variable only at comptime
comptime var scale: u32 = undefined;
scale = 32; // 1:32 scale
// pass constant value 32
minnow.scaleMe(scale);
minnow.printMe();
scale -= 16; // 1:16 scale
// pass constant value 16
shark.scaleMe(scale);
shark.printMe();
scale -= 16; // 0
// pass constant value 0
whale.scaleMe(scale);
whale.printMe();
}
069_comptime4.zig
const print = @import("std").debug.print;
pub fn main() void {
const s1 = makeSequence(u8, 3); // creates a [3]u8
const s2 = makeSequence(u32, 5); // creates a [5]u32
const s3 = makeSequence(i64, 7); // creates a [7]i64
print("s1={any}, s2={any}, s3={any}\n", .{ s1, s2, s3 });
}
// First parameter takes a comptime type value
// Second parameter takes a comptime usize value
fn makeSequence(comptime T: type, comptime size: usize) [size]T {
var arr: [size]T = undefined;
var i: usize = 0;
while (i < size) : (i += 1) {
// This @as coerces the second arg to T
arr[i] = @as(T, @intCast(i)) + 1;
}
return arr;
}
070_comptime5.zig
const print = @import("std").debug.print;
const Duck = struct {
eggs: u8,
loudness: u8,
location_x: i32 = 0,
location_y: i32 = 0,
fn waddle(self: *Duck, x: i16, y: i16) void {
self.location_x += x;
self.location_y += y;
}
fn quack(self: Duck) void {
if (self.loudness < 4) {
print("\"Quack.\" ", .{});
} else {
print("\"QUACK!\" ", .{});
}
}
};
const RubberDuck = struct {
in_bath: bool = false,
location_x: i32 = 0,
location_y: i32 = 0,
fn waddle(self: *RubberDuck, x: i16, y: i16) void {
self.location_x += x;
self.location_y += y;
}
fn quack(self: RubberDuck) void {
// required because Zig demands that every
// parameter gets used in some way
_ = self;
print("\"Squeek!\" ", .{});
}
fn listen(self: RubberDuck, dev_talk: []const u8) void {
_ = dev_talk;
self.quack();
}
};
const Duct = struct {
diameter: u32,
length: u32,
galvanized: bool,
connection: ?*Duct = null,
// Returns !void, meaning any kind of error or void (nothing)
fn connect(self: *Duct, other: *Duct) !void {
if (self.diameter == other.diameter) {
self.connection = other;
} else {
return DuctError.UnmatchedDiameters;
}
}
};
const DuctError = error{UnmatchedDiameters};
pub fn main() void {
const duck = Duck{
.eggs = 0,
.loudness = 3,
};
const rubber_duck = RubberDuck{
.in_bath = false,
};
const duct = Duct{
.diameter = 17,
.length = 165,
.galvanized = true,
};
print("duck: {} \n", .{isADuck(duck)});
print("rubber_duck: {} \n", .{isADuck(rubber_duck)});
print("duct: {}\n", .{isADuck(duct)}); // false
}
// An anytype parameter takes an argument of any type.
// Function is compiled once for each unique
// type of value passed to 'possible_duck'.
fn isADuck(possible_duck: anytype) bool {
const Type = @TypeOf(possible_duck);
const walks_like_duck = @hasDecl(Type, "waddle");
const quacks_like_duck = @hasDecl(Type, "quack");
const is_duck = walks_like_duck and quacks_like_duck;
// Condition evaluated at compile time because
// both values are constant.
// The body is only included in the
// runtime code if condition was true.
if (walks_like_duck and quacks_like_duck) {
possible_duck.quack();
}
return is_duck;
}
071_comptime6.zig
const print = @import("std").debug.print;
const Narcissus = struct {
me: *Narcissus = undefined,
myself: *Narcissus = undefined,
echo: void = undefined,
};
pub fn main() void {
print("Narcissus has room in his heart for:", .{});
// @typeInfo runs at comptime, so this is a comptime const
// (meaning its value is fixed at compile time)
const fields = @typeInfo(Narcissus).@"struct".fields;
// An 'inline for' unrolls the loop at compile time
// (meaning the body is repeated for each iteration)
// Inline allowed here because fields is comptime
inline for (fields) |field| {
// This condition is evaluable at comptime,
// so the if body is only included in runtime
// when the condition is true.
if (field.type != void) {
print(" {s}", .{field.name});
}
}
print(".\n", .{});
}
072_comptime7.zig
const print = @import("std").debug.print;
pub fn main() void {
const instructions = "+3 *5 -2 *2";
var value: u32 = 0;
comptime var i = 0;
// Loop unrolled at compile time
// (note the header expressions are evaluable at comptime)
inline while (i < instructions.len) : (i += 3) {
const digit = instructions[i + 1] - '0';
// This switch is evaluable at comptime,
// so only one case baked into runtime
// of each loop iteration.
switch (instructions[i]) {
'+' => value += digit,
'-' => value -= digit,
'*' => value *= digit,
else => unreachable,
}
}
print("{}\n", .{value});
}
073_comptime8.zig
const print = @import("std").debug.print;
const llamas = [5]u32{ 5, 10, 15, 20, 25 };
pub fn main() void {
const my_llama = getLlama(4);
print("My llama value is {}.\n", .{my_llama});
}
fn getLlama(comptime i: usize) u32 {
// Execute the expression at comptime
// (allowed for expressions that involve
// only comptime-known values)
comptime assert(i < llamas.len);
return llamas[i];
}
fn assert(ok: bool) void {
if (!ok) unreachable;
}
074_comptime9.zig
// File scope (outside of any function) is implicitly comptime
const print = @import("std").debug.print;
const llamas = makeLlamas(5); // call is implicitly comptime
fn makeLlamas(comptime count: usize) [count]u8 {
var temp: [count]u8 = undefined;
var i = 0;
while (i < count) : (i += 1) {
temp[i] = i;
}
return temp;
}
pub fn main() void {
print("My llama value is {}.\n", .{llamas[2]});
}
076_sentinels.zig
const print = @import("std").debug.print;
const sentinel = @import("std").meta.sentinel;
pub fn main() void {
// "sentinel-terminated array" of u32 values
// (0 is the sentinel)
// This array has 7 u32 values, and the last value is 0.
// Its .len is 6, so last valid index is 5.
var nums = [_:0]u32{ 1, 2, 3, 4, 5, 6 };
// "sentinel-terminated many-item pointer" of u32 values
// (0 is the sentinel)
// If ptr's type used a terminator other than 0,
// this coercion would be illegal.
const ptr: [*:0]u32 = &nums;
nums[3] = 0;
printSequence(nums);
printSequence(ptr);
}
fn printSequence(my_seq: anytype) void {
const my_typeinfo = @typeInfo(@TypeOf(my_seq));
switch (my_typeinfo) {
.array => {
print("Array:", .{});
for (my_seq) |s| {
print("{}", .{s});
}
},
.pointer => {
// if a pointer...
// The sentinel function from the meta package
// returns the sentinel value of the type (in this case 0)
const my_sentinel = sentinel(@TypeOf(my_seq));
print("Many-item pointer:", .{});
var i: usize = 0;
while (my_seq[i] != my_sentinel) {
print("{}", .{my_seq[i]});
i += 1;
}
},
else => unreachable,
}
print("\n", .{});
}
077_sentinels2.zig
// Zig strings are compatible with C strings (which
// are null-terminated) AND can be coerced to a variety of other
// Zig types:
//
// const a: [5]u8 = "array".*;
// const b: *const [16]u8 = "pointer to array";
// const c: []const u8 = "slice";
// const d: [:0]const u8 = "slice with sentinel";
// const e: [*:0]const u8 = "many-item pointer with sentinel";
// const f: [*]const u8 = "many-item pointer";
//
//
const print = @import("std").debug.print;
const WeirdContainer = struct {
data: [*]const u8,
length: usize,
};
pub fn main() void {
const foo = WeirdContainer{
// A string literal is a "constant pointer to a
// zero-terminated (null-terminated) fixed-size array of u8"
// Here the literal is coerced to [*]const u8
.data = "Weird Data!",
.length = 11,
};
const printable = foo.data[0..foo.length];
print("{s}\n", .{printable});
}
078_sentinels3.zig
const print = @import("std").debug.print;
pub fn main() void {
const data: [*]const u8 = "Weird Data!";
// @ptrCast returns type inferred from context
// (here the assignment target)
const printable: [*:0]const u8 = @ptrCast(data);
print("{s}\n", .{printable});
}
079_quoted_identifiers.zig
const print = @import("std").debug.print;
pub fn main() void {
// The @"foo" syntax is a quoted identifier.
// Allows for identifier names that otherwise would be illegal.
const @"55_cows": i32 = 55;
const @"isn't true": bool = false;
print("Sweet freedom: {}, {}.\n", .{
@"55_cows",
@"isn't true",
});
}
080_anonymous_structs.zig
const print = @import("std").debug.print;
// This function returns a generated struct type
fn Circle(comptime T: type) type {
// An anonymous struct *type* (not a literal)
return struct {
center_x: T,
center_y: T,
radius: T,
};
}
pub fn main() void {
// circle1 is an instance of Circle(i32), the struct type where T is i32
const circle1 = Circle(i32){
.center_x = 25,
.center_y = 70,
.radius = 15,
};
// circle2 is an instance of Circle(f32), the struct type where T is f32
const circle2 = Circle(f32){
.center_x = 25.234,
.center_y = 70.999,
.radius = 15.714,
};
print("[{s}: {},{},{}] \n", .{
@typeName(@TypeOf(circle1)),
circle1.center_x,
circle1.center_y,
circle1.radius,
});
print("[{s}: {d:.1},{d:.1},{d:.1}]\n", .{
@typeName(@TypeOf(circle2)),
circle2.center_x,
circle2.center_y,
circle2.radius,
});
}
081_anonymous_structs2.zig
const print = @import("std").debug.print;
pub fn main() void {
// Anonymous struct literals can have any
// combination of field names and values
printCircle(.{
.center_x = @as(u32, 205),
.center_y = @as(u32, 187),
.radius = @as(u32, 12),
});
printCircle(.{
.center_x = @as(f32, 205),
.center_y = @as(f32, 187),
.radius = @as(u8, 12),
.something = 5, // printCircle won't use this field, but that's OK
});
}
// Accepts any struct with fields .center_x, .center_y, .radius
// having types that can be printed as {} in the format string
fn printCircle(circle: anytype) void {
print("x:{} y:{} radius:{}\n", .{
circle.center_x,
circle.center_y,
circle.radius,
});
}
082_anonymous_structs3.zig
const print = @import("std").debug.print;
pub fn main() void {
// Implicit numbered field names: .0, .1, .2, .3
const foo = .{
true,
false,
@as(i32, 42),
@as(f32, 3.141592),
};
printTuple(foo);
}
fn printTuple(tuple: anytype) void {
// []const builtin.Type.StructField
const fields = @typeInfo(@TypeOf(tuple)).@"struct".fields;
// Iterate over each field
inline for (fields) |field| {
print("\"{s}\"({any}):{any} \n", .{
field.name,
field.type,
@field(tuple, field.name), // get value of field by name string
});
}
}
083_anonymous_lists.zig
const print = @import("std").debug.print;
pub fn main() void {
// Coerced tuple into array
const hello: [5]u8 = .{ 'h', 'e', 'l', 'l', 'o' };
print("I say {s}!\n", .{hello});
}
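As a related sketch (my addition, not part of the exercise), the same anonymous list literal can also coerce to other types: taking its address yields a pointer to the array, which in turn coerces to a slice.

```zig
const print = @import("std").debug.print;

pub fn main() void {
    // & takes the address of the anonymous array,
    // and *const [5]u8 then coerces to []const u8
    const hello: []const u8 = &.{ 'h', 'e', 'l', 'l', 'o' };
    print("I say {s}!\n", .{hello});
}
```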
092_interfaces.zig
const std = @import("std");
// Each insect type has a print method
const Ant = struct {
still_alive: bool,
pub fn print(self: Ant) void {
std.debug.print("Ant is {s}.\n",
.{if (self.still_alive) "alive" else "dead"});
}
};
const Bee = struct {
flowers_visited: u16,
pub fn print(self: Bee) void {
std.debug.print("Bee visited {} flowers.\n",
.{self.flowers_visited});
}
};
const Grasshopper = struct {
distance_hopped: u16,
pub fn print(self: Grasshopper) void {
std.debug.print("Grasshopper hopped {} meters.\n", .{self.distance_hopped});
}
};
const Insect = union(enum) {
ant: Ant,
bee: Bee,
grasshopper: Grasshopper,
pub fn print(self: Insect) void {
switch (self) {
// At compiletime, generates a case
// for every member of Insect
// (compile error if a member doesn't
// have a print method)
inline else => |case| return case.print(),
}
}
};
pub fn main() !void {
const my_insects = [_]Insect{
Insect{ .ant = Ant{ .still_alive = true } },
Insect{ .bee = Bee{ .flowers_visited = 17 } },
Insect{ .grasshopper = Grasshopper{ .distance_hopped = 32 } },
};
std.debug.print("Daily Insect Report:\n", .{});
for (my_insects) |insect| {
Insect.print(insect);
}
}
093_hello_c.zig
const std = @import("std");
// @cImport parses an expression of C code and imports
// the functions, types, variables, and compatible
// macro definitions into a new empty struct type,
// and then returns that type.
const c: type = @cImport(
// Block of code passed as expression to @cImport
{
// Appends "#include <$path>\n" to the c_import temporary buffer
// (this function can only be called inside @cImport expression)
@cInclude("unistd.h");
}
);
pub fn main() void {
// Call the imported c function which has this Zig signature:
//
// pub extern fn write(
// _Filehandle: c_int,
// _Buf: ?*const anyopaque,
// _MaxCharCount: c_uint)
// c_int;
//
const c_res = c.write(2, "Hello C from Zig!", 17);
std.debug.print(" - C result is {d} chars written.\n", .{c_res});
}
// "-lc" tells Zig compiler to include C libraries
// e.g. "zig run -lc exercises/093_hello_c.zig".
094_c_math.zig
const std = @import("std");
const c = @cImport(
{
@cInclude("math.h");
}
);
pub fn main() !void {
const angle = 765.2;
const circle = 360;
// Call C mod function having this Zig signature:
//
// pub extern fn fmod(
// _X: f64,
// _Y: f64)
// f64;
//
const result = c.fmod(angle, circle);
std.debug.print(
"The normalized angle of {d: >3.1} degrees is {d: >3.1} degrees.\n",
.{ angle, result });
}
095_for3.zig
const std = @import("std");
pub fn main() void {
// range from 1 up to (but NOT including) 21
for (1..21) |n| {
if (n % 3 == 0) continue;
if (n % 5 == 0) continue;
std.debug.print("{} ", .{n});
}
std.debug.print("\n", .{});
}
096_memory_allocation.zig
const std = @import("std");
fn runningAverage(arr: []const f64, avg: []f64) void {
var sum: f64 = 0;
for (0.., arr) |index, val| {
sum += val;
const f_index: f64 = @floatFromInt(index + 1);
avg[index] = sum / f_index;
}
}
pub fn main() !void {
// Pretend this was defined by reading in user input
const arr: []const f64 = &[_]f64{ 0.3, 0.2, 0.1, 0.1, 0.4 };
// Initialize new arena allocator derived from std.heap.page_allocator
var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
// Defer freeing the whole arena's memory
defer arena.deinit();
// Get Allocator from ArenaAllocator
// (ArenaAllocator is the specific allocator type, but we
// wrap as an Allocator to use the general Allocator functions)
const allocator = arena.allocator();
// Allocate a block of memory
// This call returns *[arr.len]f64, which we coerce to []f64.
const avg: []f64 = try allocator.create([arr.len]f64);
runningAverage(arr, avg);
std.debug.print("Running Average: ", .{});
for (avg) |val| {
std.debug.print("{d:.2} ", .{val});
}
std.debug.print("\n", .{});
}
// For more details on memory allocation and the different types of
// memory allocators, see https://www.youtube.com/watch?v=vHWiDx_l4V0
098_bit_manipulation2.zig
const std = @import("std");
const ascii = std.ascii;
const print = std.debug.print;
pub fn main() !void {
print("Is this a pangram? {}!\n",
.{isPangram("The quick brown fox jumps over the lazy dog.")});
}
fn isPangram(str: []const u8) bool {
if (str.len < 26) {
return false;
}
var bits: u32 = 0;
for (str) |c| {
if (ascii.isAscii(c) and ascii.isAlphabetic(c)) {
// |= performs a bitwise 'or'
//
// When shifting a u32 with <<, the right operand must be a u5
// (because 2^5 = 32, the number of bits in a u32).
// To make the right operand a u5, we use @truncate,
// which truncates the u8 to u5 (the return
// type is inferred from the context)
//
// Because bits is a u32, it can only be or'd with another
// u32. By itself, the integer literal could be any integer type,
// but @truncate needs it to be a u32 to correctly infer its return type.
bits |= @as(u32, 1) << @truncate(ascii.toLower(c) - 'a');
}
}
return bits == 0x3ff_ffff;
}
099_formatting.zig
const std = @import("std");
const print = std.debug.print;
pub fn main() !void {
const size = 15;
// header
print("\n |", .{});
for (0..size) |n| {
// format as a decimal (d), right-aligned (>),
// and minimum width 3
print("{d:>3} ", .{n + 1});
}
print("\n", .{});
// separator
var n: u8 = 0;
while (n <= size) : (n += 1) {
print("---+", .{});
}
print("\n", .{});
// rows
for (0..size) |a| {
// format as a decimal (d), right-aligned (>),
// and minimum width 2
print("{d:>2} |", .{a + 1});
for (0..size) |b| {
print("{d:3} ", .{(a + 1) * (b + 1)});
}
print("\n\n", .{});
}
}
100_for4.zig
const std = @import("std");
const print = std.debug.print;
pub fn main() void {
const hex_nums = [_]u8{ 0xb, 0x2a, 0x77 };
const dec_nums = [_]u8{ 11, 42, 119 };
// Iterate through both arrays in tandem
// (Allowed because both are same length)
for (hex_nums, dec_nums) |hex, dec| {
if (hex != dec) {
print("Uh oh! Found a mismatch: {d} vs {d}\n", .{ hex, dec });
return;
}
}
print("Arrays match!\n", .{});
}
101_for5.zig
const std = @import("std");
const print = std.debug.print;
const Role = enum {
wizard,
thief,
bard,
warrior,
};
pub fn main() void {
const roles = [4]Role{ .wizard, .bard, .bard, .warrior };
const gold = [4]u16{ 25, 11, 5, 7392 };
const experience = [4]u8{ 40, 17, 55, 21 };
// Iterate over multiple arrays in tandem (sizes must match)
// The range size will automatically match the others.
for (roles, gold, experience, 1..) |c, g, e, i| {
const role_name = switch (c) {
.wizard => "Wizard",
.thief => "Thief",
.bard => "Bard",
.warrior => "Warrior",
};
std.debug.print("{d}. {s} (Gold: {d}, XP: {d})\n", .{
i,
role_name,
g,
e,
});
}
}
102_testing.zig
// execute with `zig test` instead of `zig run`
const std = @import("std");
const testing = std.testing;
fn add(a: f16, b: f16) f16 {
return a + b;
}
// A test fails if it returns an error.
test "add" {
try testing.expect(add(41, 1) == 42);
try testing.expectEqual(42, add(41, 1));
try testing.expect(add(5, -4) == 1);
try testing.expect(add(1.5, 1.5) == 3);
}
fn sub(a: f16, b: f16) f16 {
return a - b;
}
test "sub" {
try testing.expect(sub(10, 5) == 5);
try testing.expect(sub(3, 1.5) == 1.5);
}
fn divide(a: f16, b: f16) !f16 {
if (b == 0) return error.DivisionByZero;
return a / b;
}
test "divide" {
try testing.expect(divide(2, 2) catch unreachable == 1);
try testing.expect(divide(-1, -1) catch unreachable == 1);
try testing.expect(divide(10, 2) catch unreachable == 5);
try testing.expect(divide(1, 3) catch unreachable == 0.3333333333333333);
try testing.expectError(error.DivisionByZero, divide(15, 0));
}
103_tokenization.zig
const std = @import("std");
const print = std.debug.print;
pub fn main() !void {
// Multi-line string
const poem =
\\My name is Ozymandias, King of Kings;
\\Look on my Works, ye Mighty, and despair!
;
// Returns a TokenIterator(u8, DelimiterType.any),
// which splits the poem by the delimiters
const delimiters = ",\n ;!";
var it = std.mem.tokenizeAny(u8, poem, delimiters);
var cnt: usize = 0;
while (it.next()) |word| {
cnt += 1;
print("{s}\n", .{word});
}
print("This little poem has {d} words!\n", .{cnt});
}
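A point worth noting (my addition, not part of the exercise): the std.mem split family is closely related, but unlike tokenize it yields the empty segments between adjacent delimiters. A minimal sketch of the difference:

```zig
const std = @import("std");

pub fn main() void {
    const line = "a,,b";
    // tokenizeAny skips the empty segment between the two commas,
    // yielding "a" then "b"
    var tokens = std.mem.tokenizeAny(u8, line, ",");
    while (tokens.next()) |t| std.debug.print("token: '{s}'\n", .{t});
    // splitAny keeps it, yielding "a", "", "b"
    var parts = std.mem.splitAny(u8, line, ",");
    while (parts.next()) |p| std.debug.print("part: '{s}'\n", .{p});
}
```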
104_threading.zig
const std = @import("std");
const total_time = 5;
pub fn main() !void {
std.debug.print("Starting work...\n", .{});
// Create a block so that the defers inside exit just this subscope.
{
// Spawn thread with parameter value 1
const handle = try std.Thread.spawn(.{}, thread_function, .{1});
// join() waits for thread to complete, then cleans up the thread
defer handle.join();
// Spawn thread with parameter value 2
const handle2 = try std.Thread.spawn(.{}, thread_function, .{2});
defer handle2.join();
// Spawn thread with parameter value 3
const handle3 = try std.Thread.spawn(.{}, thread_function, .{3});
defer handle3.join();
// While the threads spawned above run, we can do
// other business on main thread...
// (though in this case we're just sleeping for total_time seconds)
var io_instance: std.Io.Threaded = .init_single_threaded;
const io = io_instance.io();
try io.sleep(std.Io.Duration.fromSeconds(total_time), .awake);
std.debug.print("main thread: finished.\n", .{});
}
// only reach here after all the joins
std.debug.print("Zig is cool!\n", .{});
}
// When used as the function for a new thread, the spawn argument is passed as 'delay'
fn thread_function(delay: usize) !void {
// .init_single_threaded is an instance of the file struct but with
// member values that differ from the default
var io_instance: std.Io.Threaded = .init_single_threaded;
const io = io_instance.io();
// Sleep to delay start
// (isize is like usize but signed)
// (sleep expects a std.Io.Clock enum value,
// so .awake is understood as std.Io.Clock.awake)
const seconds = 1 * @as(isize, @intCast(delay));
try io.sleep(std.Io.Duration.fromSeconds(seconds), .awake);
// Print message after delay
std.debug.print("thread {d}: {s}\n", .{ delay, "started." });
// Sleep for the rest of the total_time
const work_time = total_time - delay;
try io.sleep(std.Io.Duration.fromSeconds(@intCast(work_time)), .awake);
std.debug.print("thread {d}: {s}\n", .{ delay, "finished." });
}
105_threading2.zig
const std = @import("std");
pub fn main() !void {
const count = 1_000_000_000;
var pi_plus: f64 = 0;
var pi_minus: f64 = 0;
// We pass pointers to these threads
{
const handle1 = try std.Thread.spawn(.{}, thread_pi,
.{ &pi_plus, 5, count });
defer handle1.join();
const handle2 = try std.Thread.spawn(.{}, thread_pi,
.{ &pi_minus, 3, count });
defer handle2.join();
}
// We read the values the spawned threads computed through the pointers
// (safe here because both threads have already been joined)
std.debug.print("PI ≈ {d:.8}\n", .{4 + pi_plus - pi_minus});
}
// Receives a pointer (necessary to get the result back to the main thread)
fn thread_pi(pi: *f64, begin: u64, end: u64) !void {
var n: u64 = begin;
while (n < end) : (n += 4) {
pi.* += 4 / @as(f64, @floatFromInt(n));
}
}
106_files.zig
const std = @import("std");
// std.process.Init represents the initial state of the process
pub fn main(init: std.process.Init) !void {
// Get the Io interface
const io: std.Io = init.io;
// Get current working directory
const cwd: std.Io.Dir = std.Io.Dir.cwd();
cwd.createDir(io, "output", .default_dir) catch |e| switch (e) {
error.PathAlreadyExists => {}, // if error.PathAlreadyExists, do nothing
else => return e, // propagate other errors
};
const output_dir: std.Io.Dir = try cwd.openDir(io, "output", .{});
defer output_dir.close(io);
const file: std.Io.File = try output_dir.createFile(io, "zigling.txt", .{});
defer file.close(io);
var file_writer = file.writer(io, &.{});
// We made file_writer a var instead of a const so that
// the & operator here returns a *Io.Writer instead
// of a *const Io.Writer
// (The write function expects a *Io.Writer)
const writer = &file_writer.interface;
const byte_written = try writer.write("It's zigling time!");
std.debug.print("Successfully wrote {d} bytes.\n", .{byte_written});
}
107_files2.zig
const std = @import("std");
pub fn main(init: std.process.Init) !void {
const io = init.io;
const cwd = std.Io.Dir.cwd();
var output_dir = try cwd.openDir(io, "output", .{});
defer output_dir.close(io);
const file = try output_dir.openFile(io, "zigling.txt", .{});
defer file.close(io);
var content = [_]u8{'A'} ** 64;
// This should print out:
// `AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA`
std.debug.print("{s}\n", .{content});
var file_reader = file.reader(io, &.{});
const reader = &file_reader.interface;
// Reads data from the file into the content array
// Returns the number of bytes read
const bytes_read = try reader.readSliceShort(&content);
std.debug.print("Successfully Read {d} bytes: {s}\n", .{
bytes_read,
content[0..bytes_read],
});
}
108_labeled_switch.zig
const std = @import("std");
const PullRequestState = enum(u8) {
Draft,
InReview,
Approved,
Rejected,
Merged,
};
pub fn main() void {
// Label on switch allows break and continue
// break = jump out of the switch
// continue = jump to start of the switch with a new value to switch on
// (effectively, a continue jumps to a different case)
pr: switch (PullRequestState.Draft) {
PullRequestState.Draft => continue :pr PullRequestState.InReview,
PullRequestState.InReview => continue :pr PullRequestState.Approved,
PullRequestState.Approved => continue :pr PullRequestState.Merged,
PullRequestState.Rejected => {
std.debug.print("The pull request has been rejected.\n", .{});
return;
},
PullRequestState.Merged => break :pr,
}
std.debug.print("The pull request has been merged.\n", .{});
}
109_vectors.zig
// @Vector returns a vector type
// (The size of vector is not strictly limited, but in practice
// you rarely want vectors of more than 4 elements.)
// The @Vector() expression returns a vector type; the {} that follows
// makes these literals.
// So v1 is assigned a vector of 3 i32s, with the values 1, 10, 100
const v1 = @Vector(3, i32){ 1, 10, 100 };
// v2 is assigned a vector of 3 f32s, with the values 2.0, 3.0, 5.0
const v2 = @Vector(3, f32){ 2.0, 3.0, 5.0 };
// Component-wise addition and multiplication
const v3 = v1 + v1; // { 2, 20, 200};
const v4 = v2 * v2; // { 4.0, 9.0, 25.0};
// Cast components of vector of i32 into a vector of f32
const v5: @Vector(3, f32) = @floatFromInt(v3); // { 2.0, 20.0, 200.0}
// Component-wise subtraction
const v6 = v4 - v5; // { 2.0, -11.0, -175.0}
// Component-wise absolute values
const v7 = @abs(v6); // { 2.0, 11.0, 175.0}
// @splat(2) returns a vector whose length is inferred from context and
// where every element is the argument
// So v8 is assigned a vector of 4 u8s, with the values 2, 2, 2, 2
const v8: @Vector(4, u8) = @splat(2); // { 2, 2, 2, 2}
// @reduce invokes the specified operation on each successive pair,
// producing a scalar result
const v8_sum = @reduce(.Add, v8); // 8, the result of 2 + 2 + 2 + 2
const v8_min = @reduce(.Min, v8); // 2, the result of min(min(min(2, 2), 2), 2)
// Fixed-length arrays can be automatically assigned to vectors (and vice-versa).
const single_digit_primes = [4]i8{ 2, 3, 5, 7 };
const prime_vector: @Vector(4, i8) = single_digit_primes;
// A calculation with arrays instead of vectors
fn calcMaxPairwiseDiffOld(list1: [4]f32, list2: [4]f32) f32 {
var max_diff: f32 = 0;
for (list1, list2) |n1, n2| {
const abs_diff = @abs(n1 - n2);
if (abs_diff > max_diff) {
max_diff = abs_diff;
}
}
return max_diff;
}
// Define Vec4 as a vector type of 4 f32s
const Vec4 = @Vector(4, f32);
// Same as prior function, but uses vectors
fn calcMaxPairwiseDiffNew(a: Vec4, b: Vec4) f32 {
const abs_diff_vec = @abs(a - b);
const max_diff = @reduce(.Max, abs_diff_vec);
return max_diff;
}
const std = @import("std");
const print = std.debug.print;
pub fn main() void {
const l1 = [4]f32{ 3.141, 2.718, 0.577, 1.000 };
const l2 = [4]f32{ 3.154, 2.707, 0.591, 0.993 };
const mpd_old = calcMaxPairwiseDiffOld(l1, l2);
const mpd_new = calcMaxPairwiseDiffNew(l1, l2);
print("Max difference (old fn): {d: >5.3}\n", .{mpd_old});
print("Max difference (new fn): {d: >5.3}\n", .{mpd_new});
}
Introduction to Jujutsu Version Control
This text is a supplement to a video about the Jujutsu version control system.
Pros and cons
Jujutsu, otherwise known as JJ, is a version control system that has a number of notable advantages over Git:
- JJ commit history can be (pseudo-) mutated, even though the commits themselves are strictly immutable. Mutable history makes certain workflows much easier, such as stacked diffs.
- JJ stores conflicts as commit meta-data, which can make conflicts easier to resolve in merges.
- JJ has no concept of an index, so commits do not have to be staged. Instead, running any JJ command will commit any dirty changes in your working copy before doing anything else. Effectively, you never need to stash, and you never get stuck in the middle of a merge: you can always just switch away to any branch at any time without losing any work.
On the other hand, Jujutsu arguably has some drawbacks:
- JJ is still pre-1.0 release. Though generally reliable already, the command line interface has not been entirely stable.
- JJ doesn’t currently support git-lfs, so it may not be a practical choice for projects with numerous or large binary files.
Backends and Git interop
JJ is architected to support swappable storage backends, but the only backend fully supported at the moment is Git itself. When using this backend, a .jj repo directory and a .git repo directory sit side-by-side in the root of your working copy, and every JJ commit is stored as a Git commit, along with some additional JJ meta-data. Likewise, the full command history of your JJ repo is stored as additional Git commits.
When cloning from or syncing through a remote repo, the remote is just a regular Git repo with no knowledge of JJ. The Git commits contain all the JJ meta-data, so JJ users can sync just with push and fetch. In fact, you can clone any Git repo and work with it on your own as a JJ repo, even if the other users of the Git repo do not use JJ.
Data model
The JJ model differs from Git in several ways:
- JJ commits can be hidden: every JJ commit has a visibility flag, and various commands will toggle this flag. What it means to be hidden is simply that hidden commits will be omitted by default when displaying history with commands like jj log. This is helpful for users because it removes clutter from the history, particularly when old commits get logically replaced by newer versions.
- The operations log is an immutable history that tracks every command which modifies the state of the repo. With each command, the log records the set of commits that were visible after the command executed, and this allows the repo to be easily and quickly restored back to any prior state. The commands which restore an old state are themselves appended to the log and never actually create or delete any commits: instead, an old state is restored simply by toggling which commits are visible.
- In addition to the normal Git content hash commit ids, JJ commits also have a change id (henceforth, “changeid”). These changeids are represented with only lowercase English letters, and they are either randomly generated or inherited from a prior commit (depending upon the operation that creates them). Most of the time, a repo will have one visible commit at a time per individual changeid, but there are scenarios where a repo may have multiple visible commits simultaneously for an individual changeid. This situation is called a “divergent change”. While divergent changes are not error states, per se, they do make your commit history a bit confusing, so normally you’ll rectify the situation in one of a few ways, such as by hiding all but one of the commits or maybe by merging the divergent changes together.
- JJ can store conflict meta-data in commits. For example, if a commit has two parents with a conflict in a certain file, the irreconcilable difference is stored in the commit alongside each parent’s version of the file. Commits with these conflicts are marked in the log. Like divergent changes, conflict commits in your repo are not an error state, per se, but normally you’ll want to rectify the situation by making new child commits that resolve the conflicts, or alternatively, you might simply hide a commit with conflicts (which may be appropriate if you just want to abandon a merge).
- Instead of branches, JJ has what it calls bookmarks, though they are basically the same thing. The main difference is that JJ has no concept of a ‘current bookmark’, and bookmarks do not automatically advance the way Git branches do. It’s common in JJ to locally track different “branches” of work by changeids rather than with bookmarks, so bookmarks are mostly used in JJ just to sync with remote branches. If a bookmark that tracks a remote branch somehow ends up out of sync with the remote branch (say, because the bookmark was manually moved), then the bookmark becomes conflicted. These bookmark conflicts can be resolved by the user simply getting the bookmark and its tracked remote back in sync.
- Whereas a Git commit can have at most two parents, a JJ commit can have any number of parents. More than two parents can be useful for cases where you want to do a multi-way merge: instead of having to merge multiple pairs, you can just merge everything together directly in one operation. Along with JJ’s conflict meta-data, this can help reduce the number of conflicts you must manually resolve.
Demo of basic operations
$ mkdir test-proj
$ cd test-proj
$ jj git init # create .jj and .git subdirs
$ jj log # view history
$ touch apple # make new file
$ jj # commit (if dirty working copy)
$ jj log # show history
$ jj edit 5d11 # switch to commit with id 5d11…
$ touch orange # create a new file
$ jj edit a94 # commit working directory, then switch
# back to commit with id a94…
Note
Note that the jj git subcommand has several of its own subcommands for working with Git remotes and the underlying Git repo, but despite some of these subcommands sharing the same name as Git subcommands (such as jj git init, jj git clone, jj git fetch, and jj git push), you should think of these JJ Git subcommands as distinct from Git's own subcommands. Also be clear that the underlying .git repo can be operated upon directly with Git just like any normal Git repo. Just keep in mind you may need to sync changes made directly through Git back to the JJ repo with the command jj git import. Normally though, you'll manage a JJ repo through JJ itself rather than use Git directly.
Unlike Git, JJ does not have a concept of a current, checked-out branch: instead, there is simply a current commit, and when we want to switch our working copy to another commit, we specify a commit id with the jj edit subcommand. For example, assuming there is a commit id that uniquely starts with 5d11, we can switch our working copy to this commit with the command jj edit 5d11.
To create a commit, we simply make changes to our working copy and then run JJ. Any time JJ runs, no matter the subcommand, it will make a new commit for any dirty changes in the working copy directory. For example, if we create a new file and then switch to a different commit with jj edit, JJ will first make a new commit before switching. Effectively, no matter the state of your repo and working copy, you can always switch away at any time.
Tip
If you don’t like any changes that get committed, there are commands to easily remove them from your visible commit history. However, if you want to completely remove certain commits from your repo, you may have to fall back on manipulating the underlying Git repo state directly with git commands.
Most common commands
- jj git init = initialize a new local repo (creates both .git and .jj dirs)
- jj git clone = clone a Git repo (creates both .git and .jj dirs)
- jj git fetch = copy commits from the remote
- jj git push = copy commits to the remote (possibly updates remote branches)
- jj log = show repo history
- jj new = create a new empty commit with a new changeid
- jj abandon = remove a commit from history (i.e. hide the commit and revise its descendants to remove its changes)
- jj squash = move file changes from one changeid to another existing changeid
- jj split = move file changes from one changeid to a new changeid
- jj operation log = show the operations log
- jj operation restore = restore repo state to the state immediately after a particular operation
- jj operation undo = restore repo state to the state immediately before the last operation
Common points of confusion
What is a change?
When we talk about a “change” in JJ, it’s not always clear if we’re talking about a changeid or if we’re talking about a commit that has a particular commit id. More confusingly, the term “change” can also refer to the actual file content changes of a commit relative to its parents, and the JJ documentation is not always clear about this distinction. These ambiguities would have been avoided if “change ids” were instead named something else, like perhaps, “revision ids”, but as it is, we have to be careful when we say “change”.
Incremental commits are replacements, not descendants
In typical Git usage, people often create incremental commits as they work on a branch, and this creates a chain of parentage: each new commit is a child of the prior.
In JJ, incrementally committing dirty working changes produces commits that logically replace the prior commit in the history: the new commit has the same changeid and parents as the prior commit, and the prior commit becomes hidden.
You can always recover hidden commits, so you shouldn’t worry about unrecoverable state, but to create a sort of ‘checkpoint’ in your development, the usual practice is to run the jj new command. This will create a new commit that:
- has a new, random changeid
- is a child of the prior commit
- is empty (represents no file changes relative to the parent)
In a sense, this marks in your history that you are starting on the next phase of development, such as fixing the next bug or implementing the next feature.
Note
Pretty much all JJ commands create ‘replacement’ commits rather than extend the chain of parentage. The exceptions are the commands that create new changeids: jj new and jj split.
Restoring old operations does not delete log entries or commits
When restoring an old repo state with the jj operation restore or jj undo commands, the visibility of relevant commits may be toggled, and these commands are appended to the operation log.
However, the only way operation log entries or commits get truly deleted from a repo is by running the jj util gc command. This command deletes entries from the operations log that are older than a certain threshold (default of 2 weeks) and also deletes the commits that were only relevant to the pruned entries.
The current empty commit gets automatically hidden when switching away
If your current commit is empty (meaning it represents no file changes relative to its parents), JJ will hide the commit if you switch away to another commit. This may happen not just if you run jj edit but if you run any command that switches you to a different commit.
Transforms

OpenGL

- OpenGL (work in progress)
Games - Misc

Unity
- intro to C# and the Unity game engine
- Unity Job System, Unity ECS (part 1), Unity ECS (part 2)
- (deprecated) Unity ECS and Job system (out of date text and videos)
game projects
Object-Oriented Programming is Bad
A later follow-up:
Japanese Vocabulary: Drilling and Acquisition

The goal of drilling
The goal of drilling is to familiarize yourself with words, not to master the words or even attain reliable, conscious recall of the words.
How much time should you spend drilling each day?
Drilling should generally take no more than 20 or 30 minutes per day. Any more time spent drilling has greatly diminished effectiveness (due to mental fatigue) and crowds out other forms of language practice that should have higher priority (listening, reading, and speaking).
How much should you drill an individual word?
Because the goal of drilling is merely to familiarize yourself with words rather than master them, you should only drill an individual word several times before removing it from your pool, regardless of how well you have learned the word. (If you later reencounter the word, you may re-add it to your pool if you think it’s worth the additional effort.)
Which words should you add to your pool?
For the first few months of learning, it makes sense to add words to your pool from lists of the most commonly used words. However, once you’re comfortable with the top 500 or 1000 most frequently occurring words, you should avoid drilling words from lists. Instead, you should focus on words that you organically encounter in listening and reading.
Maintaining the word pool at a target size
If your word pool has too few words, then random selection will pick the same words too frequently. If your word pool has too many words, then random selection will not pick an individual word frequently enough. Thus it’s important to maintain the pool at a certain size. How large, exactly? Assuming you drill 100 unique words each day, a pool size of 300 means that each individual word in your pool will be drilled, on average, once every three days, which is about the right pace.
So for individual words to show up in your drills every, say, three or four days, your pool size should be three or four times the number of words that you drill per day. These numbers determine how many words will typically be removed from your pool per day, which then tells you how many words you should add to maintain the pool at the target size. E.g. if 20 words are removed per day, then you should add 20 words per day (unless, of course, you are far above or below the target size for whatever reason, in which case you should adjust accordingly).
Run the numbers
For reference, if you drill 30 minutes a day, spending 10 seconds on each word, that gives you enough time to drill 180 words. If you want individual words to show up in your drills once every 3 days, then you need a pool of 540 words. If each word is removed from your pool after 7 drills, that means that, on average, 25.7 words will be removed from your pool (because 180 / 7 == 25.7), and so you must add 25.7 words each day to maintain a pool of 540 words. (Or another way to calculate it: words will typically remain in your pool for 21 days, so you need to add enough words to replace your full pool every 21 days, and 540 / 21 is also 25.7.)
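The arithmetic above can be checked with a short script. This is a Python sketch using the example numbers from this section (all values are illustrative, not prescriptive):

```python
# Drill-pool arithmetic from the example above (all numbers illustrative).
seconds_per_word = 10
minutes_per_day = 30
drills_per_word = 7        # a word is retired after this many lifetime drills
days_between_repeats = 3   # target gap between repeats of the same word

words_per_day = (minutes_per_day * 60) // seconds_per_word  # 180
pool_size = words_per_day * days_between_repeats            # 540
words_added_per_day = words_per_day / drills_per_word       # ~25.7

print(words_per_day, pool_size, round(words_added_per_day, 1))  # 180 540 25.7
```

Note that words_added_per_day equals both the daily removal rate (words drilled divided by drills per word) and pool_size divided by the 21-day lifetime of a word in the pool, so the pool stays at its target size.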
Because finding words to add is its own burden on top of the actual drilling, it’s important to get this balance right. My personal preference is:
- target drills per word = 7
- target size of pool = 300
- unique words drilled per day = 100
- words added to pool per day = ~14
This means I spend about 15 minutes drilling each day, and as long as I also do enough reading or listening per day, it’s easy to collect sufficient new words without much extra effort. On days where I have extra time or feel extra motivated, I’ll drill the same set of 100 words twice (spaced several hours apart). The second pass tends to go faster, so this is generally less than 10 extra minutes.
Is this a fast or slow way to learn Japanese?
If I’m typically adding and removing 14 words per day, that means I’m covering about 420 words per month and roughly 5,000 per year. Over 4 years (the commonly expected amount of time for learning Japanese to a decent level), this adds up to roughly 20,000 words drilled. So is this rate slow or fast?
Well, on the one hand, covering 20,000 words represents a good chunk of any language, especially if you focus on the words that most frequently occur across the whole language and words from the domains that matter most to you (e.g. baseball terminology if you care about baseball). On the other hand, a typical adult native Japanese speaker’s vocabulary is estimated to be around 40,000 words (though the question of what properly constitutes a unique, individual word is messier in Japanese than in English). Also, recall that this drilling process is merely meant to familiarize ourselves with the words, and we’re very unlikely to fully learn most of the words we drill after just several exposures. However:
- Drilling is just a supplement for the essential forms of practice: reading, listening, and speaking. If you devote sufficient time and effort to these activities, you will master many vocabulary words and kanji through naturally recurring, meaningful encounters, assisted by the base level of familiarity you attain through drilling.
- Vocabulary in every language, including Japanese, is not just a set of independent, arbitrary mappings of sign to signified (though it definitely seems that way in the early stages of learning). As you advance, you will find that knowledge and mastery of many words help reinforce and unlock other words in the language, thanks to common patterns of word formation and reuse of common elements (e.g. prefixes and suffixes). Effectively, once you have a base vocabulary, it becomes easier to organically acquire an intermediate vocabulary, which in turn helps you organically acquire an advanced vocabulary.
So yes, acquiring just a loose familiarity with some 20,000 words over 4 years would not be a great end result on its own. In practice, though, drilling can help you achieve far more than this as long as you use drilling as just a supplement to the core forms of language practice.
The drilling process
Once a set of words is randomly selected from the pool for a drill session, the drilling process proceeds in rounds:
- For the first round, randomly select ~10 to ~20 words from the set.
- For each word, say the answer aloud or internally, then check the answer. If your answer was right, you remove the word from the set. If your answer was wrong, you include the word in the next round.
- Each subsequent round, include the wrong-answered words from the prior round and then randomly select more words from the remaining set to fill out the round (up to a max of ~10 to ~20).
- The drilling ends when all words have been removed from the set.
Once a word is correctly answered and removed from the set, the word’s lifetime drill count should be incremented, and if the word hits the max drill count target, it should be removed from the word pool.
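The round-based process above can be sketched in Python. This is a minimal illustration, not a real drill app: the round size is a placeholder, and check_answer stands in for prompting the user and checking their response.

```python
import random

def drill_session(words, check_answer, round_size=15):
    """Drill until every word has been answered correctly once.
    check_answer(word) -> True if the user's answer was right.
    Returns the words in the order they were drilled."""
    pending = list(words)  # words not yet correctly answered
    carryover = []         # wrong-answered words from the prior round
    order = []
    while pending:
        # Wrong-answered words lead the round; fresh words fill the rest.
        fresh_pool = [w for w in pending if w not in carryover]
        n_fresh = min(max(round_size - len(carryover), 0), len(fresh_pool))
        this_round = carryover + random.sample(fresh_pool, n_fresh)
        carryover = []
        for word in this_round:
            order.append(word)
            if check_answer(word):
                pending.remove(word)    # correct: word leaves the session set
            else:
                carryover.append(word)  # wrong: word repeats next round
    return order
```

After a session like this, each word's lifetime drill count would be incremented, and any word reaching the max count (e.g. 7) would be removed from the pool, per the paragraph above.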
Note
It’s fine and arguably beneficial if each round sorts all of the wrong-answered words to the front of the list: this allows you to focus a bit more on the words giving you trouble before dealing with the words that are new that round. Also note that the size of each round affects how frequently the wrong-answered words reoccur in between new words: the fewer words per round, the smaller the ratio of new words relative to wrong-answered words carried over from the prior round, and thus you might be less distracted by new words in between repetitions of the wrong-answered words.
Misc
Less notable things that didn’t fit elsewhere: