Advent of Code 2024: Day 2 - When Languages Shape Our Thinking
After yesterday's adventure with sorting columns (and exploring idiomatic solutions), today's challenge presented an intriguing twist. Validating reactor safety reports might sound straightforward, but it quickly reveals its complexity when you factor in all the edge cases and that clever "Problem Dampener" feature introduced in part 2.
Here I am again with my four trusty companions (or four horsemen?): Elixir, Rust, Go, and Haskell. But today's story isn't about which language has the prettiest syntax or the most elegant solution. It's about how each language made me think about the problem in completely different ways, and about learning some fascinating things about sequence validation along the way!
The Challenge: Reactor Safety Validation
The problem today involves validating reactor safety sequences. A sequence is considered safe if:
- It's either strictly increasing or strictly decreasing
- Adjacent numbers must differ by at least 1 but no more than 3
- The sequence must contain at least 2 numbers
Part 2 introduces the "Problem Dampener" - a feature that allows removing exactly one number from the sequence to potentially make it safe. This adds an interesting twist to our validation logic, as we need to consider all possible single-number removals.
Example:
- Safe sequence: [1, 3, 6] (strictly increasing, differences: 2, 3)
- Unsafe sequence: [1, 3, 2, 4] (not strictly increasing/decreasing)
- Dampened sequence: [1, 3, 2, 4] -> [1, 3, 4] (becomes safe after removing the 2)
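Before looking at the individual solutions, here is a minimal, self-contained Rust sketch of both checks, written as free functions purely for illustration (the actual solutions below hang these off a Report type):

```rust
/// Part 1: a report is safe if it is strictly monotonic and
/// every adjacent pair differs by 1..=3.
fn is_safe(levels: &[i32]) -> bool {
    if levels.len() < 2 {
        return false;
    }
    let diffs: Vec<i32> = levels.windows(2).map(|w| w[1] - w[0]).collect();
    let valid = diffs.iter().all(|d| (1..=3).contains(&d.abs()));
    let monotonic = diffs.iter().all(|&d| d > 0) || diffs.iter().all(|&d| d < 0);
    valid && monotonic
}

/// Part 2: safe as-is, or safe after removing any single level.
fn can_be_safe(levels: &[i32]) -> bool {
    is_safe(levels)
        || (0..levels.len()).any(|i| {
            let mut dampened = levels.to_vec();
            dampened.remove(i); // try dropping the level at index i
            is_safe(&dampened)
        })
}

fn main() {
    assert!(is_safe(&[1, 3, 6]));        // increasing, diffs 2 and 3
    assert!(!is_safe(&[1, 3, 2, 4]));    // changes direction
    assert!(can_be_safe(&[1, 3, 2, 4])); // safe after dropping the 2
}
```

The brute-force dampener tries every single-element removal, which is fine at these input sizes.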
The Core Algorithm: Four Ways to Skin a Cat
Before we dive into the "Problem Dampener", let's talk about how each language influenced my approach to the basic validation. This is where things get really interesting.
In Rust, I found myself thinking in terms of iterators and early returns:
impl Report {
    fn is_safe(&self) -> bool {
        if self.levels.len() < 2 {
            return false;
        }
        let mut iter = self.levels.windows(2);
        let mut increasing = true;
        let mut decreasing = true;
        while let Some(&[a, b]) = iter.next() {
            let diff = b - a;
            match diff.signum() {
                -1 => increasing = false,
                1 => decreasing = false,
                _ => return false, // Equal numbers aren't allowed
            }
            if diff.abs() < 1 || diff.abs() > 3 {
                return false;
            }
        }
        increasing || decreasing
    }
}
Look at that windows(2) iterator - it's Rust's way of saying "here's a clean, zero-cost abstraction for looking at adjacent pairs." No allocations, no fuss, just pure efficiency. And those early returns? They're not just for show - they prevent unnecessary computation the moment we know our sequence is invalid.
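To make that concrete, here's a tiny illustration of what windows(2) actually yields (the variable names are mine, not from the solution):

```rust
fn main() {
    let levels = vec![1, 3, 6];
    // windows(2) yields overlapping two-element views into the vec -
    // borrowed slices, so nothing is copied or allocated per pair
    let pairs: Vec<&[i32]> = levels.windows(2).collect();
    assert_eq!(pairs, vec![&[1, 3][..], &[3, 6][..]]);
}
```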
Elixir, on the other hand, had me thinking in transformations:
def is_safe?(levels) when length(levels) < 2, do: false

def is_safe?(levels) do
  differences =
    levels
    |> Enum.zip(tl(levels))
    |> Enum.map(fn {a, b} -> b - a end)

  valid_diffs? = Enum.all?(differences, &(&1 |> abs() |> Kernel.in(1..3)))
  increasing? = Enum.all?(differences, &(&1 > 0))
  decreasing? = Enum.all?(differences, &(&1 < 0))

  valid_diffs? and (increasing? or decreasing?)
end
See how different this feels? Instead of checking conditions as we go, we're transforming the data in discrete steps: first create pairs, then compute differences, then check our conditions. It's more declarative - we're saying what we want, not how to do it.
Go: Pragmatic Simplicity
type Report struct {
	levels []int
}

func (r Report) isSafe() bool {
	if len(r.levels) < 2 {
		return false
	}
	increasing, decreasing := true, true
	for i := 1; i < len(r.levels); i++ {
		diff := r.levels[i] - r.levels[i-1]
		absDiff := diff
		if diff < 0 {
			absDiff = -diff
			increasing = false
		} else {
			decreasing = false
		}
		if absDiff < 1 || absDiff > 3 {
			return false
		}
	}
	return increasing || decreasing
}

// Part 2: Problem Dampener
func (r Report) canBeSafe() bool {
	if r.isSafe() {
		return true
	}
	for i := range r.levels {
		dampened := make([]int, 0, len(r.levels)-1)
		dampened = append(dampened, r.levels[:i]...)
		dampened = append(dampened, r.levels[i+1:]...)
		// Composite literals in an if condition need parentheses
		if (Report{levels: dampened}).isSafe() {
			return true
		}
	}
	return false
}
Haskell: Pure Functional Elegance
module Report where

isSafe :: [Int] -> Bool
isSafe xs
  | length xs < 2 = False
  | otherwise = validDiffs && (increasing || decreasing)
  where
    pairs = zip xs (tail xs)
    diffs = map (uncurry subtract) pairs
    validDiffs = all (\d -> abs d >= 1 && abs d <= 3) diffs
    increasing = all (> 0) diffs
    decreasing = all (< 0) diffs

-- Part 2: Problem Dampener
canBeSafe :: [Int] -> Bool
canBeSafe xs = isSafe xs || any isSafe (removals xs)
  where
    removals ys = [take i ys ++ drop (i + 1) ys | i <- [0 .. length ys - 1]]
Implementation Deep Dive
Each language's implementation reveals its unique strengths:
Rust: Zero-Cost Abstractions
impl Report {
    fn is_safe(&self) -> bool {
        if self.levels.len() < 2 {
            return false;
        }
        let diffs: Vec<_> = self.levels.windows(2)
            .map(|w| w[1] - w[0])
            .collect();
        let valid_diffs = diffs.iter()
            .all(|&d| (1..=3).contains(&d.abs()));
        let monotonic = diffs.iter().all(|&d| d > 0)
            || diffs.iter().all(|&d| d < 0);
        valid_diffs && monotonic
    }

    // Part 2: Problem Dampener
    fn can_be_safe(&self) -> bool {
        if self.is_safe() {
            return true;
        }
        // Try removing each number and check if sequence becomes safe
        (0..self.levels.len()).any(|i| {
            let mut dampened = self.levels.clone();
            dampened.remove(i);
            Report { levels: dampened }.is_safe()
        })
    }
}
Elixir: Elegant Transformations
defmodule Report do
  def is_safe?(levels) when length(levels) < 2, do: false

  def is_safe?(levels) do
    differences =
      levels
      |> Enum.zip(tl(levels))
      |> Enum.map(fn {a, b} -> b - a end)

    valid_diffs? = Enum.all?(differences, &(&1 |> abs() |> Kernel.in(1..3)))

    monotonic? =
      Enum.all?(differences, &(&1 > 0)) or
        Enum.all?(differences, &(&1 < 0))

    valid_diffs? and monotonic?
  end

  # Part 2: Problem Dampener
  def can_be_safe?(levels) do
    is_safe?(levels) or
      0..(length(levels) - 1)
      |> Enum.any?(fn i ->
        levels
        |> List.delete_at(i)
        |> is_safe?()
      end)
  end
end
Error Handling: The Art of Failing Gracefully
Error handling is the unsung hero of programming. It's easy to overlook, but each language has its own way of making us think about errors. Let's take a look.
Rust's type system is like a safety net, catching errors before they even happen:
use std::str::FromStr;

#[derive(Debug, thiserror::Error)]
enum Error {
    #[error("failed to parse number: {0}")]
    Parse(String),
    #[error("invalid sequence length")]
    InvalidLength,
    #[error("invalid difference between numbers")]
    InvalidDifference,
}

impl FromStr for Report {
    type Err = Error;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let levels: Result<Vec<_>, _> = s
            .split_whitespace()
            .map(str::parse::<i32>)
            .collect();
        levels
            .map(|l| Report { levels: l })
            .map_err(|e| Error::Parse(e.to_string()))
            .and_then(|r| {
                if r.levels.len() < 2 {
                    Err(Error::InvalidLength)
                } else {
                    Ok(r)
                }
            })
    }
}
Every possible error is accounted for, and the compiler ensures we don't forget any. It's like having a personal error handler, always watching our backs.
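For readers who want to try this without pulling in the thiserror crate, here's a dependency-free sketch of the same idea; the hand-rolled ParseError enum stands in for the derived error type above:

```rust
use std::str::FromStr;

#[derive(Debug)]
struct Report {
    levels: Vec<i32>,
}

// Hand-rolled stand-in for the thiserror-derived enum above
#[derive(Debug, PartialEq)]
enum ParseError {
    Parse(String),
    InvalidLength,
}

impl FromStr for Report {
    type Err = ParseError;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let levels = s
            .split_whitespace()
            .map(|t| t.parse::<i32>().map_err(|_| ParseError::Parse(t.to_string())))
            .collect::<Result<Vec<_>, _>>()?;
        if levels.len() < 2 {
            return Err(ParseError::InvalidLength);
        }
        Ok(Report { levels })
    }
}

fn main() {
    assert!("7 6 4 2 1".parse::<Report>().is_ok());
    assert_eq!("7".parse::<Report>().unwrap_err(), ParseError::InvalidLength);
    assert!(matches!("a b".parse::<Report>(), Err(ParseError::Parse(_))));
}
```

The `?` operator on the collected Result gives us the same early-exit behavior as the map_err/and_then chain, just more tersely.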
Elixir's approach is more... conversational. We use the with expression to describe the happy path, and handle errors as they arise:
def parse(input) when is_binary(input) do
  with {:ok, numbers} <- split_and_parse(input),
       {:ok, _report} <- validate_length(numbers),
       {:ok, _} <- validate_differences(numbers) do
    {:ok, numbers}
  else
    {:error, :invalid_format} ->
      {:error, "Input must be space-separated numbers"}

    {:error, :invalid_length} ->
      {:error, "Sequence must have at least 2 numbers"}

    {:error, :invalid_difference} ->
      {:error, "Adjacent numbers must differ by 1-3"}
  end
end
It's not just about handling errors; it's about expressing the happy path clearly and handling errors separately. We're telling a story of how things should go, and what to do when they don't.
Language Features and Philosophies
Elixir: Functional and Declarative
Elixir encourages thinking in terms of data transformations and pipelines. Its functional nature shines through in how problems are broken down into small, composable functions:
def is_safe?(levels) when length(levels) < 2, do: false

def is_safe?(levels) do
  differences =
    levels
    |> Enum.zip(tl(levels))
    |> Enum.map(fn {a, b} -> b - a end)

  valid_diffs? = Enum.all?(differences, &(&1 |> abs() |> Kernel.in(1..3)))
  increasing? = Enum.all?(differences, &(&1 > 0))
  decreasing? = Enum.all?(differences, &(&1 < 0))

  valid_diffs? and (increasing? or decreasing?)
end
The use of pipelines makes the code readable and expressive, focusing on the "what" rather than the "how."
Go: Simple and Efficient
Go's solution is straightforward, emphasizing simplicity and performance. It uses imperative loops and state management through boolean flags:
func (r Report) isSafe() bool {
	if len(r.levels) < 2 {
		return false
	}
	increasing, decreasing := true, true
	prev := r.levels[0]
	for _, curr := range r.levels[1:] {
		diff := curr - prev
		if diff <= 0 { increasing = false }
		if diff >= 0 { decreasing = false }
		if absDiff := abs(diff); absDiff < 1 || absDiff > 3 {
			return false
		}
		prev = curr
	}
	return increasing || decreasing
}

// abs is a tiny helper - Go has no built-in integer abs
func abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}
Go's pragmatism is evident in its focus on clear, efficient solutions.
Haskell: Elegant and Concise
Haskell leverages its functional nature to provide a concise and elegant solution. It uses high-level functions and list operations:
isSafe :: Report -> Bool
isSafe (Report xs)
  | length xs < 2 = False
  | otherwise =
      (all (> 0) diffs || all (< 0) diffs)
        && all isValidDiff diffs
  where
    diffs = differences xs
    differences ys = zipWith (-) (tail ys) ys
    isValidDiff d = abs d >= 1 && abs d <= 3
Rust: Safe and Performant
Rust combines safety and performance with its strong type system and ownership model. It uses iterators and window operations to achieve a clean solution:
impl Report {
    fn is_safe(&self) -> bool {
        if self.levels.len() < 2 {
            return false;
        }
        let diffs: Vec<_> = self.levels.windows(2)
            .map(|w| w[1] - w[0])
            .collect();
        (diffs.iter().all(|&x| x > 0) || diffs.iter().all(|&x| x < 0))
            && diffs.iter().all(|&x| (1..=3).contains(&x.abs()))
    }
}
Rust's zero-cost abstractions and compile-time checks ensure both efficiency and safety.
Performance and Implementation Analysis
Each language's approach to data structures and algorithms reveals interesting trade-offs between clarity, performance, and memory usage.
Data Structures and Memory Management
Rust: Zero-Cost Abstractions
impl Report {
    fn is_safe(&self) -> bool {
        if self.levels.len() < 2 {
            return false;
        }
        let diffs: Vec<_> = self.levels.windows(2)
            .map(|w| w[1] - w[0])
            .collect();
        (diffs.iter().all(|&x| x > 0) || diffs.iter().all(|&x| x < 0))
            && diffs.iter().all(|&x| (1..=3).contains(&x.abs()))
    }
}
Rust's implementation showcases its strength in memory management. The core of our data lives in a Vec<i32>, giving us the performance of contiguous memory. When we need to process pairs of numbers, the windows() iterator steps in, providing a view into our data without copying. While we do allocate a new vector for differences, it's a single, controlled allocation that Rust's ownership system will clean up the moment we're done with it. The processing happens in stack-efficient chunks, making the most of CPU cache lines.
Go: Pragmatic Efficiency
func (r Report) isSafe() bool {
	if len(r.levels) < 2 {
		return false
	}
	increasing, decreasing := true, true
	prev := r.levels[0]
	for _, curr := range r.levels[1:] {
		diff := curr - prev
		if diff <= 0 { increasing = false }
		if diff >= 0 { decreasing = false }
		if absDiff := abs(diff); absDiff < 1 || absDiff > 3 {
			return false
		}
		prev = curr
	}
	return increasing || decreasing
}
Go's approach is straightforward. It works directly with a slice, which is just a window into an array with a length and capacity. There's no allocation during processing – we just keep track of our state with two boolean flags and a previous value. The slice header sits on the stack while the backing array lives on the heap, but Go's garbage collector handles this seamlessly. The early returns aren't just for readability; they help us avoid unnecessary computation the moment we know our answer.
Elixir: Immutable Transformations
def is_safe?(levels) do
  differences =
    levels
    |> Enum.zip(tl(levels))
    |> Enum.map(fn {a, b} -> b - a end)

  valid_diffs? = Enum.all?(differences, &(&1 |> abs() |> Kernel.in(1..3)))
  increasing? = Enum.all?(differences, &(&1 > 0))
  decreasing? = Enum.all?(differences, &(&1 < 0))

  valid_diffs? and (increasing? or decreasing?)
end
Elixir embraces immutability, and it shows in how we handle data. Each transformation creates a new list, but this isn't as wasteful as it might sound. The BEAM VM (Erlang's virtual machine) is highly optimized for this kind of work. Pattern matching makes our code expressive, and the pipeline operator turns our solution into a clear sequence of transformations. While we make multiple passes over the data, the clarity of the code makes it a worthwhile trade-off.
Haskell: Elegant Laziness
isSafe :: Report -> Bool
isSafe (Report xs)
  | length xs < 2 = False
  | otherwise =
      (all (> 0) diffs || all (< 0) diffs)
        && all isValidDiff diffs
  where
    diffs = differences xs
    differences ys = zipWith (-) (tail ys) ys
    isValidDiff d = abs d >= 1 && abs d <= 3
Haskell's lazy evaluation transforms how we think about data processing. What looks like multiple list operations often compiles down to a single pass through the data. The differences function doesn't eagerly create a new list; it describes a computation that will happen only when needed. GHC's optimization passes, particularly stream fusion, often eliminate intermediate lists entirely. It's a beautiful example of how Haskell lets us write clear, mathematical solutions while the compiler handles the heavy lifting of making it efficient.
Performance Characteristics
Let's talk about how these different approaches play out in practice. It's a fascinating study in trade-offs and priorities.
When it comes to memory usage, each language takes its own path. Rust gives us precise control – we know exactly when and where we're allocating memory for our differences vector. Go keeps things minimal, using only the memory it absolutely needs during processing. Elixir creates multiple immutable lists, but the BEAM VM is built to handle this efficiently. Haskell's lazy evaluation means we might not need those intermediate lists at all – they often exist only in the abstract, optimized away by GHC's fusion rules.
Time complexity tells an interesting story too. For the basic safety check, all four implementations are O(n); they just get there differently. Go and Rust take the direct route with a single pass and early returns when possible. Elixir makes multiple passes, but gains clarity and maintainability in return. Haskell's approach looks like multiple passes on paper, but lazy evaluation and fusion often collapse these into a single efficient traversal. The brute-force Problem Dampener is the outlier: trying every single-element removal makes part 2 O(n²) per report, which is perfectly acceptable at these input sizes.
The way each language handles memory allocation reflects its broader philosophy. Rust gives us a single, controlled allocation that we can reason about precisely. Go avoids allocations during processing entirely, keeping things predictable. Elixir creates multiple immutable structures, prioritizing code clarity over memory efficiency. Haskell's lazy evaluation means allocations often disappear entirely after optimization.
When it comes to optimization opportunities, each language plays to its strengths. Rust leverages LLVM's powerful optimization pipeline and zero-cost abstractions. Go's simplicity makes it easy for the compiler to perform escape analysis and function inlining. Elixir's pattern matching and pipeline operators give the BEAM VM clear optimization boundaries. Haskell's purity and lazy evaluation enable aggressive optimizations like fusion and deforestation.
Trade-offs and Considerations
What fascinates me most is how each language's philosophy guides us toward different implementation choices, even for this seemingly straightforward problem.
Take Rust, for instance. Its obsession with zero-cost abstractions isn't just a technical detail - it's a mindset. When writing the Rust solution, I found myself naturally thinking about memory control and efficiency. The windows() iterator isn't just convenient; it's a perfect example of Rust's promise that abstractions shouldn't cost us at runtime. You get the ergonomics of high-level iterators with the performance of manual pointer manipulation.
Go takes a different stance. Its implementation might look almost too simple at first glance, but that's exactly the point. By using a straightforward slice and a single pass through the data, it achieves something remarkable: code that's both easy to understand and performant. There's no clever optimization here, just clean, predictable performance that's easy to reason about.
Elixir's approach tells yet another story. When I wrote the Elixir solution, I wasn't thinking about memory allocations or iteration counts. Instead, I was focused on transforming data through clear, composable steps. Yes, this creates multiple intermediate lists, but it gives us something valuable in return: code that reads like a specification. The pipeline operator (|>) turns our solution into a story of data transformation.
And then there's Haskell, quietly performing its lazy evaluation magic. At first glance, its solution might look inefficient – multiple passes over the list? But thanks to GHC's optimization capabilities, particularly fusion rules, those intermediate lists often disappear entirely. It's a beautiful example of how Haskell lets us write clear, mathematical solutions while the compiler handles the heavy lifting of making it efficient.
Conclusion: Beyond the Problem
What started as a simple sequence validation problem turned into an exploration of how different programming paradigms influence our problem-solving approaches. The "Problem Dampener" feature, in particular, revealed how each language's strengths can lead to distinct yet equally valid solutions.
The most valuable lesson wasn't about finding the "best" solution, but understanding how different tools shape our thinking and approach to problem-solving. Each language offered unique insights that could be applied across the entire spectrum of programming challenges.
All code examples are available in my Advent of Code 2024 repository, including complete implementations, tests, and benchmarks.
Happy coding, and see you tomorrow for Day 3!