Memory Protections

Overview

Modern binaries ship with multiple layers of memory protection that make exploitation significantly harder than the classic "overwrite return address, jump to shellcode" approach. Understanding what each protection does and how it can be bypassed is essential for exploit development.

Always check protections before writing an exploit — the protection profile dictates which techniques are viable.

Checking Protections

checksec

# checksec
# https://github.com/slimm609/checksec.sh
checksec --file=./binary

Example output:

RELRO           STACK CANARY      NX            PIE             RPATH      RUNPATH  Symbols   FORTIFY
Full RELRO      Canary found      NX enabled    PIE enabled     No RPATH   No RUNPATH  No Symbols  Yes

pwntools checksec

# pwntools
# https://github.com/Gallopsled/pwntools
from pwn import *
elf = ELF('./binary')
print(elf.checksec())

Output:

RELRO:      Full RELRO
Stack:      Canary found
NX:         NX enabled
PIE:        PIE enabled
Stripped:   No

Protection Summary

Protection        What It Prevents               Bypass Techniques
NX (No-Execute)   Executing code on stack/heap   ROP, ret2libc, ret2plt
Stack Canary      Stack buffer overflow          Canary leak, format string leak, brute force (forking)
ASLR              Predicting addresses           Information leak, partial overwrite, brute force
PIE               Predicting binary addresses    Information leak (binary base), partial overwrite
Full RELRO        GOT overwrite                  Target other writable areas (__malloc_hook, stack)
FORTIFY_SOURCE    Unsafe function calls          Does not catch all overflows

NX (No-Execute) / DEP

NX marks the stack and heap as non-executable. Code injected into these regions cannot be executed directly.

How it works: The CPU enforces page-level permissions — memory pages are marked either writable or executable, never both (W^X policy).
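The effect is visible from inside a running process: on Linux, /proc/self/maps lists each mapping's permissions, and with NX enabled the stack shows rw-p (readable, writable, not executable). A minimal Linux-only sketch in Python:

```python
# Linux-specific: inspect this process's own memory map and confirm the stack
# is readable/writable but not executable (permissions look like "rw-p").
with open('/proc/self/maps') as maps:
    stack_line = next(line for line in maps if '[stack]' in line)

perms = stack_line.split()[1]      # e.g. "rw-p"
print(stack_line.strip())
assert 'w' in perms and 'x' not in perms, "stack is executable -- NX is off"
```

Running the same check against a binary built with -z execstack shows rwxp instead.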

Compile flags:

# Disable NX (enable executable stack) — for testing only
gcc -z execstack -o vuln_nx_off vuln.c

# Enable NX (default on modern GCC)
gcc -o vuln_nx_on vuln.c

Bypass: Use code-reuse techniques — return to existing executable code in the binary or loaded libraries:

  • ret2libc — return to system() or execve() in libc
  • ROP — chain small instruction sequences ending in ret (gadgets)
  • ret2plt — return to PLT stubs to call library functions

Stack Canary (Stack Smashing Protector)

A random value placed between local variables and the saved return address. If a buffer overflow overwrites the canary, the program detects the corruption and calls __stack_chk_fail() before returning.

How it works: The compiler inserts a canary check in the function epilogue. On x86-64 Linux, the canary is loaded from fs:0x28 (thread-local storage) and always starts with a null byte to prevent string-based leaks.
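The purpose of the leading null byte can be shown directly: C string reads (strcpy, puts, printf "%s") stop at the first 0x00, so a canary beginning with \x00 never rides along when an adjacent unterminated buffer is printed. A small simulation (the canary value is made up):

```python
# Why the canary's leading NUL byte matters: reading the stack as a C string
# stops at the first 0x00, so the canary stays hidden behind it.
canary = b'\x00' + b'\x11\x22\x33\x44\x55\x66\x77'   # hypothetical canary value
stack_bytes = b'A' * 16 + canary                      # buffer followed by canary

# Simulate printing the buffer as a C string: the read stops at the NUL.
leaked = stack_bytes.split(b'\x00')[0]
print(leaked)     # only the 16 'A's leak; the canary does not
```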

Compile flags:

# Disable stack canary
gcc -fno-stack-protector -o vuln_no_canary vuln.c

# Enable for functions with buffers (default)
gcc -fstack-protector -o vuln_canary vuln.c

# Enable for all functions
gcc -fstack-protector-all -o vuln_canary_all vuln.c

# Enable for functions with certain heuristics
gcc -fstack-protector-strong -o vuln_canary_strong vuln.c

Bypass techniques:

  • Leak the canary — use a format string bug or other information disclosure to read the canary value, then include the correct canary in the overflow payload
  • Brute force (forking servers) — if the server forks per connection, all child processes share the same canary; brute-force one byte at a time (256 attempts per byte, 7 bytes total for x86-64 since the first byte is always \x00)
  • Overwrite without hitting canary — if the target is before the canary (e.g., a function pointer in a struct), the canary is never checked

ASLR (Address Space Layout Randomization)

Randomizes the base addresses of the stack, heap, and shared libraries on each execution. Prevents hardcoding addresses in exploits.

Scope: ASLR is applied by the kernel each time a process is loaded. The randomize_va_space setting below controls it system-wide, though individual processes can opt out via personality flags (as setarch -R does).

# Check ASLR status (2 = full, 1 = partial, 0 = off)
cat /proc/sys/kernel/randomize_va_space

# Disable ASLR temporarily (requires root)
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space

# Re-enable ASLR
echo 2 | sudo tee /proc/sys/kernel/randomize_va_space

# Disable ASLR for a single process
setarch $(uname -m) -R ./binary

What ASLR randomizes:

Region                   Randomized?
Stack                    Yes
Heap                     Yes
Shared libraries (libc)  Yes
Binary text (with PIE)   Yes
Binary text (no PIE)     No — fixed at 0x400000

Bypass techniques:

  • Information leak — leak a runtime address (libc, stack, or binary), calculate the base, derive all other addresses
  • Partial overwrite — overwrite only the low, non-randomized bytes of an address; the page offset (low 12 bits) is always the same, so changing just the lowest one or two bytes redirects control with at most 4 bits left to guess
  • Brute force — on 32-bit systems, ASLR entropy is low enough (~8-11 bits for libc) that brute force is practical
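The leak-then-compute step is plain pointer arithmetic. A sketch with hypothetical leaked addresses and symbol offsets (real offsets come from the target's libc, e.g. via readelf -s libc.so.6):

```python
# Hypothetical leak-based ASLR bypass arithmetic. One leaked libc pointer
# pins the randomized base; every other libc address follows from offsets.
puts_offset   = 0x080e50          # offset of puts inside libc (example value)
system_offset = 0x0522c0          # offset of system inside libc (example value)

leaked_puts = 0x7f89abc80e50      # runtime address of puts, read via an info leak

libc_base   = leaked_puts - puts_offset
system_addr = libc_base + system_offset

assert libc_base & 0xfff == 0     # sanity check: a correct base is page-aligned
print(hex(libc_base), hex(system_addr))
```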

PIE (Position-Independent Executable)

PIE randomizes the base address of the binary itself (text, data, BSS, GOT, PLT). Without PIE, the binary always loads at a fixed address (typically 0x400000 on x86-64), making ROP gadgets within the binary predictable.

Compile flags:

# Enable PIE (default on modern GCC/distros)
gcc -pie -o vuln_pie vuln.c

# Disable PIE
gcc -no-pie -o vuln_nopie vuln.c

Bypass: Leak any address within the binary to calculate the base address. Common leak sources:

  • Format string vulnerability reading return addresses on the stack
  • Information disclosure bug revealing a function pointer
  • Partial overwrite of the low bytes (12 bits of page offset are not randomized)
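The partial-overwrite arithmetic can be made concrete (all addresses hypothetical): rewriting only the lowest two bytes of a saved pointer gets 12 bits for free from the fixed page offset, leaving a single nibble to brute force:

```python
# Partial-overwrite arithmetic: the low 12 bits (page offset) of any code
# address survive randomization, so overwriting only the lowest bytes of a
# saved pointer can redirect control within the same page or nearby pages.
saved_ret       = 0x55d3f1a042f7  # randomized return address on the stack
target_page_off = 0x6a0           # known page offset of the target function

# Overwrite only the lowest 2 bytes: 12 of those 16 bits are fixed, so just
# 4 bits (16 possibilities) must be guessed.
guess_high_nibble = 0x4           # the one nibble being brute-forced
new_low16 = (guess_high_nibble << 12) | target_page_off

patched = (saved_ret & ~0xffff) | new_low16
print(hex(patched))
```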

RELRO (Relocation Read-Only)

Controls whether the GOT (Global Offset Table) is writable after program initialization.

Partial RELRO: GOT is writable — attackers can overwrite GOT entries to redirect library function calls. (Historically the default; many modern distros now build with Full RELRO by default.)

Full RELRO: The dynamic linker resolves all symbols at load time and marks the GOT read-only. GOT overwrite attacks fail.

Compile flags:

# Partial RELRO (-z lazy forces lazy binding on distros that default to Full RELRO)
gcc -Wl,-z,relro,-z,lazy -o vuln_partial vuln.c

# Full RELRO
gcc -Wl,-z,relro,-z,now -o vuln_full vuln.c

# No RELRO
gcc -Wl,-z,norelro -o vuln_norelro vuln.c

Bypass Full RELRO: Target writable memory other than the GOT:

  • __malloc_hook / __free_hook (removed in glibc 2.34+)
  • Stack return addresses
  • Function pointers in application data
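Given a leaked libc base, candidate targets are again derived by adding symbol offsets. A sketch with example offsets for an older glibc (the hook symbols are gone in 2.34+, so on modern systems the targets shift to stack return addresses or application function pointers):

```python
# With Full RELRO the GOT is read-only, so overwrite targets move elsewhere.
# Offsets below are examples for an older glibc; real values come from the
# target's libc and differ per build.
libc_base = 0x7f0000000000        # hypothetical leaked libc base
offsets = {
    '__free_hook':   0x1eeb28,    # example offset (symbol removed in glibc 2.34+)
    '__malloc_hook': 0x1ecb70,    # example offset (symbol removed in glibc 2.34+)
}
targets = {name: libc_base + off for name, off in offsets.items()}
for name, addr in targets.items():
    print(f'{name}: {hex(addr)}')
```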

FORTIFY_SOURCE

Compile-time and runtime checks on unsafe functions like strcpy, sprintf, memcpy. When the compiler can determine the buffer size at compile time, it replaces unsafe calls with checked variants (__strcpy_chk, __memcpy_chk) that abort on overflow.

# Enable FORTIFY (level 1 — compile-time checks)
gcc -D_FORTIFY_SOURCE=1 -O2 -o vuln_fortify vuln.c

# Enable FORTIFY (level 2 — additional runtime checks)
gcc -D_FORTIFY_SOURCE=2 -O2 -o vuln_fortify2 vuln.c

# Requires optimization (-O1 or higher) to be effective

FORTIFY does not catch all overflows — only those where the compiler can determine the destination buffer size. Hand-rolled loops or dynamically sized buffers are not protected.

Compilation Cheatsheet

# All protections disabled (maximum vulnerability — for testing only)
gcc -fno-stack-protector -z execstack -no-pie -Wl,-z,norelro \
    -D_FORTIFY_SOURCE=0 -o vuln_all_off vuln.c

# All protections enabled
gcc -fstack-protector-all -pie -Wl,-z,relro,-z,now \
    -D_FORTIFY_SOURCE=2 -O2 -o vuln_all_on vuln.c

# Common CTF setup (NX on, no canary, no PIE, partial RELRO)
# -z lazy keeps RELRO partial on distros that default to Full RELRO
gcc -fno-stack-protector -no-pie -Wl,-z,lazy -o vuln_ctf vuln.c

Verify with checksec after compilation:

# checksec
# https://github.com/slimm609/checksec.sh
checksec --file=./vuln_all_off
checksec --file=./vuln_all_on
