Shannon Entropy and Password Strength
Information entropy (in Shannon's sense) 1 measures the uncertainty, or unpredictability, in a password-generation process. Formally, if a password consists of L symbols, each drawn uniformly and independently from an alphabet of N symbols, its entropy is:
\({\Large
H = \log_2 N^L = L \log_2 N = L \frac{\log N}{\log 2}
}
\)
where N is the number of possible symbols and L is the number of symbols in the password. To find the length L needed to achieve a desired strength H, with each symbol drawn uniformly at random from an alphabet of N symbols, one computes:
\({\Large
L = \left\lceil \frac{H}{\log_2 N} \right\rceil
}
\)
In practical terms, "bits of entropy" corresponds to the base-2 logarithm of the number of guesses needed to brute-force the password space. For example, a password with 42 bits of entropy would require on the order of 2^42 (~4.4 trillion) attempts to guess in the worst case. Each additional bit of entropy doubles the search space, making attacks exponentially harder. 2
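The two formulas above take only a few lines of Python to apply (the alphabet sizes below are illustrative: 94–95 printable-ASCII characters, 7776 Diceware words):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """H = L * log2(N): entropy of a password of `length` symbols,
    each drawn uniformly from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

def required_length(target_bits: float, alphabet_size: int) -> int:
    """L = ceil(H / log2(N)): shortest length reaching `target_bits`."""
    return math.ceil(target_bits / math.log2(alphabet_size))

print(entropy_bits(94, 8))         # 8 printable-ASCII chars: ~52.4 bits
print(required_length(256, 95))    # chars needed for 256-bit strength
print(required_length(128, 7776))  # Diceware words for 128-bit strength
```

Running it confirms the figures used throughout this section: an 8-character printable-ASCII password tops out around 52 bits, and reaching 256 bits takes 39 characters of that alphabet.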
Human-chosen passwords typically have far less entropy than their length might suggest. Users often create passwords that are predictable (common words, patterns, etc.), leading to effective entropy much lower than the theoretical maximum. Even an 8-character password chosen from 94 printable characters has at most ~52 bits of entropy if truly random, but in practice human choices yield far less. In recognition of this, standards like NIST SP 800-63B caution that estimating entropy for user passwords is difficult and often inaccurate. 3 NIST instead emphasizes password length and blacklist checks (disallowing common passwords) as more reliable measures, since users' choices deviate from uniform randomness. 4
Machine-generated passwords can achieve much higher entropy. A cryptographically random password of length 32 using the full printable-ASCII range (95 characters) has ~32 × log2(95) ≈ 210 bits of entropy, and lengthening to 43 characters of that set yields ~283 bits (exceeding 256 bits). Passphrases generated by diceware or large word lists can also reach 60–100+ bits if sufficiently long (e.g. 6 random Diceware words ≈ 78 bits). A true 256-bit password implies 2^256 possible combinations – an astronomically large space (~10^77 possibilities). For comparison, 128-bit entropy (2^128 ≈ 3.4×10^38 possibilities) is already considered beyond brute-force for classical computing (more on this below). In effect, a 256-bit random secret is on par with keys used in top-tier cryptosystems (e.g. AES-256 keys have 256-bit entropy by design).
It’s important to note that entropy is a property of the password generation process, not the particular password. If a user truly picks a password uniformly at random from 2^256 possibilities, it has 256 bits of entropy – regardless of how “random” or complex the resulting string may look. In practice, ensuring such high entropy requires using a strong source of randomness (since humans cannot reliably produce or even remember 256 bits of randomness on their own).5
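In Python, for instance, the standard `secrets` module draws from the OS CSPRNG; a minimal sketch of generating a 256-bit secret, either as raw hex or as a fixed-length password over a 95-symbol printable alphabet (the 39-character length follows from the ceiling formula above):

```python
import math
import secrets
import string

# 256 bits as 32 random bytes, hex-encoded (64 hex characters).
hex_secret = secrets.token_hex(32)

# The same entropy as a printable-ASCII password: 95 symbols per
# position gives log2(95) ~ 6.57 bits each, so 39 chars > 256 bits.
alphabet = string.ascii_letters + string.digits + string.punctuation + " "
length = math.ceil(256 / math.log2(len(alphabet)))  # -> 39
password = "".join(secrets.choice(alphabet) for _ in range(length))

print(hex_secret)
print(password, f"({length * math.log2(len(alphabet)):.1f} bits)")
```

The entropy here is a property of the generation process (uniform draws from the CSPRNG), exactly as the paragraph above describes, regardless of what the resulting string happens to look like.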
True Random Number Generators (TRNGs) and Entropy Sources
Cryptographically secure random number generators are capable of producing 256-bit strings with nearly “full” entropy (i.e. each output bit is unpredictable). A hardware random number generator (HRNG) – also called a true RNG (TRNG) or non-deterministic RNG – leverages physical processes to generate randomness. 6 Common entropy sources include 7:
- Thermal noise (e.g. electronic circuit noise in diodes or resistors)
- Jitter/chaos in electronics (e.g. timing jitter in oscillators, metastability of flip-flops)
- Quantum phenomena (e.g. radioactive nuclear decay, or single-photon behavior such as which-path outcomes at a beam splitter)
- Other analog processes (atmospheric noise, Brownian motion, chaotic laser signals, etc.)
Because raw physical processes often have biases or less than perfect entropy, real TRNG designs include components to condition and test the output. A typical TRNG contains:
- an analog noise source,
- a digitizer and a randomness extractor that “whitens” or compresses the raw bits to eliminate bias, and
- continuous health tests to ensure the source hasn’t failed or degraded.
The goal is to achieve “full entropy” outputs – essentially statistically uniform random bits.
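As a toy illustration of the "whitening" step (not a production conditioner – real designs use vetted constructions such as hash- or cipher-based extractors), the classic von Neumann corrector removes bias from a stream of independent raw bits by looking at non-overlapping pairs:

```python
import random

def von_neumann_extract(raw_bits):
    """Debias independent-but-biased bits: for each non-overlapping
    pair, emit 0 for (0,1), 1 for (1,0), and drop (0,0)/(1,1).
    The output is unbiased if the input bits are independent."""
    out = []
    for a, b in zip(raw_bits[::2], raw_bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Simulate a biased noise source with P(1) = 0.8.
rng = random.Random(42)
raw = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]
white = von_neumann_extract(raw)
print(sum(raw) / len(raw))      # ~0.8 (biased input)
print(sum(white) / len(white))  # ~0.5 (debiased output)
```

Note the cost: heavily biased input discards most raw bits, which mirrors why real TRNGs collect many more raw bits than they emit.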
Standards for entropy quality: Certifying a TRNG often involves demonstrating a high min-entropy per bit. For example, the German BSI’s AIS 31 standard (functionality class PTG.3) requires that after conditioning, each output bit has a min-entropy of ≥0.98 – meaning the output is very close to ideal randomness. 8 In one illustration, to produce a 256-bit output with such high confidence, a larger number of raw bits (e.g. 327 bits in one example) might be processed to guarantee 256 bits of entropy with an extremely small failure probability. NIST’s SP 800-90B similarly provides statistical tests and entropy estimation techniques to validate hardware entropy sources, ensuring that a claimed 256-bit output indeed contains near-256 bits of unpredictable information. 9
Real-world deployments: Modern CPUs and security modules incorporate TRNGs. Intel’s CPUs since the “Ivy Bridge” generation include a hardware RNG that uses thermal noise in a metastable circuit to generate random bits. 10 The raw bits are run through a conditioner (a SHA-256 or AES-based entropy extractor) which condenses, for instance, 512 raw bits into 256 output bits. Because the entropy source is designed to provide at least 0.5 bits of entropy per raw bit, the 256-bit conditioned output is “almost completely random” with only a tiny safety margin shortfall. In fact, at startup the Intel RNG collects a very large raw sample (e.g. 32,768 bits) to seed its DRBG, ensuring a robust 256-bit seed before producing outputs. Many other systems (hardware security modules, TPMs, and dedicated devices) use combinations of quantum effects and classical noise to generate true random 256-bit keys, often certified under standards like NIST (FIPS 140-2/3) or Common Criteria.
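The conditioning idea – feeding in more raw entropy than you emit – can be sketched with a SHA-256-based extractor. This is a simplification (Intel's actual conditioner is AES-based), and the 2:1 ratio simply mirrors the 512-raw-bits-to-256-output example above:

```python
import hashlib
import os

def conditioned_256(raw_entropy_source, raw_bytes: int = 64) -> bytes:
    """Compress `raw_bytes` (default 512 bits) of imperfect raw input
    into a 256-bit output. If the source supplies at least 0.5 bits of
    min-entropy per raw bit, the output is close to full entropy."""
    raw = raw_entropy_source(raw_bytes)
    assert len(raw) == raw_bytes
    return hashlib.sha256(raw).digest()

# os.urandom stands in for a raw hardware noise source in this sketch.
seed = conditioned_256(os.urandom)
print(seed.hex())  # 256-bit conditioned output
```

A real design would additionally run the continuous health tests described above before trusting any output.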
In summary, it is feasible to obtain ~256 bits of Shannon entropy from physical generators – and these outputs can serve as extremely strong passwords or cryptographic keys. Properly tested TRNGs give nearly uniform 256-bit strings, meaning an attacker has no shortcut better than guessing from 2^256 possibilities.
Modern Password-Hashing Schemes (Argon2id, scrypt, bcrypt)
Password hashing schemes (also known as key derivation functions when used for keys) are designed to safely store passwords by one-way hashing them and to make brute-force cracking difficult even if an attacker obtains the hash. Classical cryptographic hash functions (SHA-256, etc.) are designed to be fast, which is undesirable for password security because an attacker can test billions of guesses per second on specialized hardware. 11 Modern password-hashing functions introduce work factors – computational and memory costs – to slow down each guess attempt.
Argon2id: This function (winner of the 2015 Password Hashing Competition) is a state-of-the-art memory-hard hashing scheme. It aims to use significant memory and CPU so that attacks on GPUs/ASICs gain little advantage. Argon2 has three flavors: Argon2d (data-dependent memory access, fast but susceptible to side-channel leaks), Argon2i (data-independent access, resistant to side channels, slightly slower), and Argon2id (a hybrid that is side-channel resistant and memory-hard, recommended in most cases). The design goal is to maximize trade-off resistance: any attacker who tries to use less memory or parallelism than the hash function will incur a disproportionate time penalty. Internally, Argon2id fills a large memory buffer with pseudo-random data dependent on the password and salt, then performs numerous CPU-intensive operations (based on the BLAKE2b hash function) on the memory blocks. Notably, Argon2 introduces arithmetic operations (64-bit multiplications, etc.) that increase circuit depth for hardware implementations – making ASIC implementations slower without slowing CPUs significantly.
Parameters and guidance: Argon2id's security can be tuned via parameters: memory size (m), number of iterations (t), and degree of parallelism (p lanes/threads). The IETF's RFC 9106 (2021) 12 standardizes Argon2 and gives two example "safe" parameter sets:
- First recommendation: Argon2id, t=1 iteration, p=4 lanes, m=2^21 (i.e. 2^21 KiB blocks, 2 GiB of memory), with a 128-bit salt and 256-bit hash output. This aims to be a "uniformly safe" default for most applications, using a very large memory footprint to thwart parallel cracking. (Indeed, 2 GiB per hash attempt will strain even GPUs and ASICs.)
- Second recommendation: If 2 GiB is untenable, use Argon2id with t=3, p=4, m=2^16 (64 MiB). This uses more CPU (3 iterations) but much less RAM, still providing strong resistance.
For example, Argon2id can be configured such that hashing a password takes ~0.5 seconds and uses, say, 1–4 GiB of RAM on a server. This would significantly slow an attacker who tries to batch-test billions of passwords, while a legitimate user only incurs a half-second delay during login. Argon2 has been widely recognized; it’s used in applications and libraries (libsodium, password managers) and is recommended by OWASP and others. As of 2021 it’s an Internet standard.
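The two RFC 9106 parameter sets can be written down as plain, library-agnostic dictionaries; since Argon2's m parameter counts KiB blocks, the 2 GiB and 64 MiB figures fall out of simple arithmetic:

```python
# RFC 9106 recommended Argon2id parameter sets; m counts KiB blocks.
FIRST_CHOICE  = {"t": 1, "p": 4, "m": 2**21, "salt_bits": 128, "tag_bits": 256}
SECOND_CHOICE = {"t": 3, "p": 4, "m": 2**16, "salt_bits": 128, "tag_bits": 256}

def memory_bytes(params: dict) -> int:
    """Convert Argon2's m parameter (KiB blocks) to bytes of RAM."""
    return params["m"] * 1024

print(memory_bytes(FIRST_CHOICE) / 2**30, "GiB")   # 2.0 GiB
print(memory_bytes(SECOND_CHOICE) / 2**20, "MiB")  # 64.0 MiB
```

These dictionaries map directly onto the parameter names used by common Argon2 bindings (e.g. libsodium or argon2-cffi), though each library spells them slightly differently.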
scrypt: Scrypt (published by Colin Percival in 2009, RFC 7914) was one of the earlier memory-hard password derivation functions. It was designed to force computations to use a large memory buffer, thereby reducing the efficiency of GPU/FPGA attacks where memory per core is limited. Scrypt operates by filling an array with pseudorandom data (derived from the password via PBKDF2-HMAC-SHA-256 and mixed with the Salsa20/8 core), then repeatedly reading and writing it in a pseudo-random access pattern. This process is both CPU-intensive and memory-intensive. The memory usage can be tuned via the cost parameter N (a power of two), with memory required ≈ 128 · r · N bytes 13. For instance, N=2^14, r=8 uses ~16 MiB of RAM and is often used for interactive logins; larger N (like 2^20 for ~1 GiB) can be used for more security if latency is less of a concern. Scrypt has been proven (in idealized models) to be near-optimally memory-hard, meaning any attempt to compute it with less memory than intended results in a proportionally larger time cost. This property makes it "ASIC-resistant" to a large degree. Scrypt's original application was securing backups in Tarsnap, and it later became famous for its use in certain cryptocurrencies (as a proof-of-work algorithm in Litecoin), leveraging the memory hardness to reduce mining advantage on specialized hardware.
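Python's standard library exposes scrypt directly via `hashlib`, which makes the memory formula easy to check. A sketch with the interactive-login parameters above (the salt would be random per user; the passphrase is just an example):

```python
import hashlib
import os

N, r, p = 2**14, 8, 1       # interactive-login parameters
mem = 128 * r * N           # bytes of RAM the mixing array needs
print(mem // 2**20, "MiB")  # 16 MiB

salt = os.urandom(16)       # random per-user salt
key = hashlib.scrypt(b"correct horse battery staple",
                     salt=salt, n=N, r=r, p=p,
                     maxmem=64 * 2**20, dklen=32)
print(key.hex())            # 256-bit derived key
```

The `maxmem` argument is a safety cap: `hashlib.scrypt` refuses parameter sets whose memory demand exceeds it, which guards against accidentally configuring an N that would exhaust server RAM.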
bcrypt: Bcrypt (developed by N. Provos and D. Mazières, 1999) is an older but still widely used password hash. It's based on the Blowfish block cipher – effectively it runs the Blowfish key schedule repeatedly, using the password as the key. Bcrypt introduced the concept of a configurable work factor (cost), which determines how many rounds of the Blowfish-derived hashing are applied. For example, cost 10 means 2^10 = 1024 rounds. Bcrypt is not as memory-intensive as scrypt or Argon2, but it does involve a small memory table (the Blowfish S-boxes, totaling 4 KB) that must be repeatedly accessed. This design made bcrypt somewhat less efficient on GPUs: each GPU thread has limited local cache (often ~1 KB), so it can't hold all the S-boxes, causing memory bottlenecks when many threads fall back to global memory. Thus, while bcrypt's primary strength is its tunable CPU cost, it also incidentally consumes enough memory (a few kilobytes per hash) to hamper massively parallel GPU cracking to a degree. Bcrypt automatically handles salting (a 128-bit salt) and outputs a 184-bit hash (the full record is usually shown as a 60-character string). Its adoption is ubiquitous in web frameworks and OS systems, though it has limits (e.g. it truncates input passwords to 72 bytes and cannot produce arbitrarily long outputs).
Common goal – mitigate low-entropy passwords: All these schemes are designed under the assumption that user passwords have limited entropy (typically 20–60 bits at best). An offline attacker armed with modern hardware might attempt billions or trillions of candidate passwords per second if the hash were a fast algorithm like SHA-1/MD5. This is why secure systems salt and hash passwords with a slow function. 14 The salt (a random per-user value) prevents rainbow table attacks (precomputed hash dictionaries) and ensures identical passwords hash differently. The computational cost means that even if an attacker obtains the hash file, each guess is expensive, ideally to the point of being infeasible.
NIST’s digital identity guidelines sum it up: stored passwords “SHALL be salted and hashed using a suitable one-way key derivation function” – preferably a memory-hard function – so that “each password guessing trial by an attacker… is expensive and the cost of an attack is high or prohibitive.” Memory-hard functions (like the above) are explicitly recommended because they increase an attacker’s workload significantly. In practice, this means that a user password with e.g. 40 bits of entropy might be effectively strengthened to require far more than 2^40 operations to crack if the hashing function itself is costly to compute.
Known attacks and considerations: None of these functions are “unbroken” in the sense of cryptographic preimage attacks – their security rests on brute-force resistance. However, attackers continually optimize hardware and techniques to speed up password cracking:
- For bcrypt, there have been advances in FPGA/ASIC implementations, but the cost factor and the 4 KB memory requirement still ensure it's much slower on specialized hardware than, say, a SHA-256 hash. One must use a sufficiently high cost factor to account for today's faster CPUs; e.g. cost 12 (2^12 rounds) or more is commonly recommended in 2025.
- For scrypt, its memory hardness has held up, though one must choose N such that the memory (and time) is large enough. Too low parameters (as sometimes seen in misconfigured systems) can undercut its security. Research has shown scrypt’s memory-hard design is close to optimal, meaning attackers can’t do much better than straightforwardly using the required memory. Still, hardware with massive memory bandwidth (e.g. high-end GPUs) can parallelize some scrypt computations, so increasing N (or using Argon2) is prudent as hardware improves.
- For Argon2, since it’s newer, ongoing analysis looks at side-channel resistance and trade-off attacks. Argon2i/Argon2id are designed to avoid timing channels (using data-independent memory access for a portion of execution) and to thwart time-memory tradeoff attacks. The Argon2d variant (fully data-dependent) is not safe against attackers who can monitor memory access patterns, but Argon2id addresses this by a hybrid approach. Thus Argon2id is generally recommended for password hashing, providing both defense against external side-channel attackers and strong resistance to GPU/ASIC cracking. No practical “break” of Argon2’s core algorithm is known; attacks mainly involve finding the cheapest way to meet its memory & time requirements, but the RFC recommended settings keep the bar high (e.g. requiring gigabytes of memory).
- All modern schemes incorporate salts (typically 128 bits is standard) to prevent any reuse of work across hashes. Some systems also include a secret “pepper” (an application secret key) to further protect against hash file compromise – though that’s an extra precaution.
Standardization and usage: Argon2id and scrypt have been published as RFC 9106 and RFC 7914 respectively, and bcrypt (though older and never formally standardized in an RFC) remains accepted for password storage in many compliance regimes. Many password storage guidelines (OWASP, NIST, and others) now recommend using a memory-hard KDF (Argon2id or scrypt) with appropriate parameters, or at minimum PBKDF2 (which is CPU-hard but not memory-hard) with a very high iteration count, if the former are not available. The bottom line is that these hashes are tuned to the reality that user passwords are often the weakest link. By expending server computation (which is done infrequently, once per login) we drastically reduce an attacker's ability to test passwords in bulk.
Brute-Force Search: Classical vs. Quantum Feasibility
If users truly choose 256-bit random passwords, the brute-force search space is 2^256 – an astronomically large number (~1.16 × 10^77 possibilities). To understand the security of such a password, we must consider the capabilities of both classical computers and quantum computers (present and future). The table below compares the scale of brute-force attacks against different key sizes:
| Password Entropy | Classical Brute-Force Work | Quantum Brute-Force Work (Grover’s alg.) |
|---|---|---|
| 128-bit (2^128 possibilities) | ~2^128 trials needed. Infeasible: even at 1 billion guesses/sec, it would take ~5×10^21 years on average. | ~2^64 quantum steps. Still infeasible in practice: effectively 64-bit security, crackable only with unrealistically large quantum capacity. |
| 256-bit (2^256 possibilities) | ~2^256 trials needed. Beyond astronomical: far exceeds the age of the universe by many orders of magnitude. Even with hypothetical future tech, 2^256 is out of reach. | ~2^128 quantum steps (due to √N speedup). Computationally intractable: equivalent to 128-bit classical security, which is considered unbreakable in practice. |
Classical brute-force (today): No existing classical computer system – nor any foreseeable classical system – can enumerate 2^128 or 2^256 possibilities. As a reference point, 2^128 ≈ 3.4×10^38. If a data center of machines could test even 10^12 (1 trillion) hashes per second (far beyond current capabilities for a costly hash like Argon2, though simple hashes on specialized hardware can approach this), it would take ~3.4×10^26 seconds to sweep the full 128-bit space (half that on average). That is about 1.08×10^19 years – nearly a billion times the age of the universe – effectively impossible. A famous estimate based on physical limits uses Landauer's principle, the minimum energy needed to erase a single bit of information at temperature T:
\({\Large
E_{bit} = k_B T \ln 2
}
\)
found that the energy required merely to cycle a counter through 2^256 states is astronomically large – on the order of 10^56 joules at room temperature, vastly more than the Sun will emit over its entire lifetime.
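Plugging numbers into Landauer's bound makes the point concrete (assumptions: room temperature, and the ~1.2×10^44 J figure for the Sun's total lifetime output is a rough astronomical estimate):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

e_bit = k_B * T * math.log(2)  # Landauer limit per bit operation, ~2.87e-21 J
e_total = e_bit * 2**256       # just counting through 2^256 states
sun_lifetime_output = 1.2e44   # J, rough estimate (luminosity x ~10 Gyr)

print(f"{e_bit:.2e} J per bit operation")
print(f"{e_total:.2e} J for 2^256 operations")
print(f"{e_total / sun_lifetime_output:.1e} x the Sun's lifetime output")
```

This is why 256-bit exhaustive search is ruled out thermodynamically, not merely by the speed of today's hardware.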
In fact, 128-bit keys are considered secure against exhaustive search not just practically but fundamentally, due to physical limits. By extension, 256-bit keys are secure with an enormous margin – even if we imagine future computers 10^9 times faster, the time to exhaust 2^256 would still be fantastically out of reach. NIST and other agencies project that 128-bit security will be sufficient well beyond 2030, and recommend 256-bit keys only for ensuring security in the distant future or against unlikely breakthroughs. 15
Anticipated hardware progress (10–20 years): Classical computing power will keep growing, but nowhere near enough to bridge such gaps. Even if computers were a trillion times faster in 20 years, cracking 2^256 would remain a fantasy. (By comparison, the difference between a 40-bit password and an 80-bit password is a factor of 2^40 ≈ 1 trillion – a gap the industry closed over decades – but the difference between 128-bit and 256-bit is 2^128, an utterly unreachable factor.) Contemporary attackers use specialized hardware like GPUs and FPGAs to accelerate brute force. For example, a high-end GPU can compute SHA-256 on the order of 10^9–10^10 hashes per second, meaning a 64-bit key space (2^64 ≈ 1.8×10^19) might be exhausted by a large botnet or mining farm given months or years. Indeed, 64-bit is now considered borderline – one discussion notes 2^64 operations is "pretty much on the border of being cracked by general computers", and dedicated Bitcoin-mining ASICs (a single unit performs ~10^14 SHA-256 hashes/sec, and the network collectively far more) could exhaust a 2^64 space in days or less. However, 128-bit keys (2^128) are 2^64 times harder than 64-bit – a factor of 18 quintillion. There are no foreseeable improvements that multiply computing power by 10^19. Thus 128-bit remains safe, and 256-bit provides a margin so large that increasing it further is practically irrelevant for classical threats.
Grover's algorithm (quantum brute-force): Quantum computing introduces the potential for faster brute-force search via Grover's algorithm, which can find a target in an unsorted search space of size N in ~√N steps (a quadratic speedup). If a quantum computer could be applied to password hashing as an oracle, a 256-bit key could be cracked in on the order of 2^128 operations, and a 128-bit key in 2^64 operations. This effectively halves the security level in bits (256→128, 128→64). It's important to note this is the provably optimal speedup for unstructured search – no quantum algorithm can do better than Grover's √N in the generic case. So we do not expect any quantum attack to brute-force 256-bit keys in fewer than ~2^128 steps without exploiting some structure in the hashing algorithm.
Quantum feasibility: The theoretical speedup still leaves us with enormous numbers. 2^128 ≈ 3.4×10^38 steps – how might that translate to actual quantum computing time? In theory, if one had a quantum computer that could perform, say, 10^6 Grover iterations per second (which would itself require a huge clock speed and thousands of error-corrected qubits working in parallel), it would take ~3.4×10^32 seconds to run 2^128 steps – roughly 10^25 years. Even optimistic projections of quantum tech do not come anywhere remotely close to such capabilities. In fact, to brute-force a 128-bit key (which requires ~2^64 ≈ 1.8×10^19 steps via Grover), one estimate notes the cost is still "ridiculously large" in energy and time – "all the world's resources for 10 years" wouldn't be enough for 2^64 quantum operations in that scenario. This highlights that Grover's algorithm, while theoretically halving the exponent, does not make brute-forcing 128+ bit keys practical with any known or envisioned quantum technology.
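The classical and Grover timelines quoted above are straightforward arithmetic; the guess rates are illustrative assumptions from the text, not measured figures:

```python
SECONDS_PER_YEAR = 3.156e7
AGE_OF_UNIVERSE_YEARS = 1.38e10

def years_to_search(space_bits: float, guesses_per_sec: float) -> float:
    """Years to sweep the full 2**space_bits space at a given rate."""
    return 2**space_bits / guesses_per_sec / SECONDS_PER_YEAR

# Classical: full 128-bit space at a (generous) 10^12 hashes/sec.
classical = years_to_search(128, 1e12)
# Grover on a 256-bit key: 2^128 iterations at an (optimistic) 10^6/sec.
grover = years_to_search(128, 1e6)

print(f"classical 128-bit: {classical:.2e} years "
      f"({classical / AGE_OF_UNIVERSE_YEARS:.0e} x universe age)")
print(f"Grover on 256-bit: {grover:.2e} years")
```

Both figures come out around 10^19 and 10^25 years respectively, matching the estimates in the preceding paragraphs.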
In practice, current quantum computers have on the order of a few hundred noisy qubits. Running Grover at scale would require thousands of logical (error-corrected) qubits and billions of quantum gate operations for even moderate key sizes. We are many breakthroughs away from a quantum computer that could realistically brute-force a 64-bit space, let alone 128-bit. Most quantum computing efforts are focused on breaking RSA/ECC (via Shor’s algorithm) which is thought to need on the order of thousands of logical qubits; even those optimistic about quantum tech suggest a timeframe of 10-20 years for a quantum computer that might break RSA-2048. Symmetric ciphers like AES-256 are much more resilient: AES-256 under Grover requires 2^128 steps, which is generally believed to be beyond reach for even quantum computers we might plausibly see in the mid-century. Indeed, the US National Academies and NIST have treated AES-256 as safe for the foreseeable future in a post-quantum world (it offers 128-bit post-quantum security, which is considered sufficient). NIST’s guidance is to double symmetric key lengths to counter Grover’s algorithm – hence 256-bit keys to protect against future quantum adversaries, as 256-bit becomes ~128-bit effective.
To summarize the brute-force picture:
- A 256-bit truly random password is uncrackable by brute force using classical computing; no attacker can attempt 2^256 hashes, not now and not in any realistic future scenario.
- Even in a hypothetical future where large quantum computers exist, that 256-bit password would still retain an effective 128-bit security level, which is regarded as unbreakable by all current estimates.
- Thus, from a brute-force standpoint, a properly generated 256-bit secret is overkill – it far exceeds the security margin needed to resist guessing, providing a comfortable cushion against both massive classical attacks and conceivable quantum attacks.
Security in Depth & Practical Considerations for 256-bit Passwords
Given the above, if every user had a truly random 256-bit password, one might ask: Do we still need slow, memory-hard password hashing (like Argon2id)? The situation changes because the threat model changes: the traditional threat of an offline attacker guessing billions of passwords becomes irrelevant if each password is effectively one-in-10^77. However, security best practices encompass more than brute-force resistance alone. Let’s consider arguments on both sides:
Arguments for retiring slow hashing (in the 256-bit password scenario)
- Brute-force mitigation is moot: If passwords have ~256 bits of entropy, an attacker cannot brute-force even a single hash with any technology. The whole point of Argon2, bcrypt, etc. – to make guessing expensive – is not needed when even fast hashing is unguessable. In other words, the password is as strong as a private crypto key; using a simple hash (SHA-256) would already yield a practically uncrackable hash value. The extra computational cost of Argon2 doesn’t meaningfully increase security because 2^256 vs 2^256×(cost factor) is negligible; the search space is already infeasible. As one expert put it, if you ensure the passphrase has as much entropy as a typical cryptographic key, “you don’t need a PBKDF, only a plain KDF.” In a controlled environment (all passwords verifiably 256-bit random), Argon2id’s memory hardness is overkill.
- Performance and operational efficiency: Slow hashing algorithms consume CPU time and memory. Argon2id with large settings can notably increase authentication server load and latency. If the security gain is effectively zero (given the enormous password strength), this is wasted resource. By switching to a fast hash or a single-round KDF (like a keyed hash or HKDF), authentication can be instantaneous and scalable to more requests per second. This can be important in high-scale systems or constrained environments. For instance, an IoT network where each device holds a 256-bit key might prefer a quick hash check rather than using 100ms of CPU for Argon2 each time – especially if devices authenticate millions of times per day.
- Simplicity and reduced complexity: Using a standard cryptographic hash (or a straightforward KDF without large memory usage) is simpler to implement and less likely to introduce misconfiguration or new vulnerabilities. There have been cases where misconfigured bcrypt/scrypt parameters or bugs in Argon2 integration caused issues; by avoiding these and treating the password essentially like a cryptographic key, one reduces points of failure. Essentially, it becomes a standard key comparison problem, which is well-understood. (It’s still wise to use a salt and a hash – but a “fast” one would do – mainly to avoid storing the raw secret).
- No benefit from attacker's perspective: An attacker who steals the password database would obtain only salted hashes. But if those hashes correspond to 256-bit secrets, the attacker can't crack them regardless of whether Argon2 was used. Even a fast hash like SHA-512 would not be invertible within the lifetime of the universe. Using Argon2 with massive resource costs might slow down one hash computation by, say, 10^3× or even 10^6×, but when the baseline is 10^77 possibilities, it doesn't appreciably change the attacker's success probability (which is effectively zero in both cases without some other weakness). In short, the risk of offline guessing is already mitigated by the password's strength, making additional hashing cost a diminishing return.
- Edge-case: deliberately using passwords as keys elsewhere: If the 256-bit password is also used in other contexts (for example, as an encryption key for a user’s data), storing it directly (or a reversible-encrypted form) might be necessary for the service to function (though this is generally not a good design). In such a special case, hashing it would defeat the purpose. (This is a niche argument; typically one should separate passwords and encryption keys, but it’s worth noting scenarios like key escrow where hashing isn’t desired.)
In summary, one could argue that if you treat passwords as high-entropy secrets akin to cryptographic keys, you can forego expensive password hashing and use a simpler one-way function. The system would then rely on the strength of the keys themselves for security, much like systems that use 256-bit API tokens or cryptographic keys which are often stored hashed only with a fast hash (or even just stored encrypted).
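Under that (strong) assumption, verification reduces to a salted fast hash plus a constant-time comparison. A sketch of what "treating the password like a key" might look like – the helper names are illustrative, not from any particular framework:

```python
import hashlib
import hmac
import os

def store(secret: bytes) -> tuple[bytes, bytes]:
    """Store a high-entropy secret as (salt, salted-SHA-256 digest).
    The salt prevents cross-account work reuse; no slow KDF is used,
    on the assumption the secret itself carries ~256 bits of entropy."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + secret).digest()

def verify(secret: bytes, salt: bytes, digest: bytes) -> bool:
    # hmac.compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(hashlib.sha256(salt + secret).digest(), digest)

token = os.urandom(32)              # a 256-bit machine-generated secret
salt, digest = store(token)
print(verify(token, salt, digest))  # True
```

Note that even in this "fast hash" design, the secret is still salted and never stored in the clear – the debate below is only about the slowness of the hash, not about whether to hash at all.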
Arguments against retiring slow hashing (even with 256-bit passwords)
- Defense in depth & unforeseen weaknesses: Good security design assumes things can go wrong. What if not all users actually use a full 256-bit random password? All it takes is one user choosing a weaker password (maybe a 40-bit entropy passphrase) in violation of policy – if the system had removed Argon2 hashing, that one account becomes dramatically easier to crack. With Argon2/scrypt in place, that user’s hash would still be somewhat protected by the computational cost. In practice, enforcing that every single password has 256-bit entropy might be unrealistic; slow hashing provides a safety net for any deviation. As Maarten B. noted, “in general you cannot easily ensure the amount of entropy of a passphrase” across all users. Thus, keeping the key derivation function strong is a hedge against human or process failures.
- Limiting damage in a breach: Even if passwords are truly random, storing them in hashed form (regardless of speed) is critical. If an attacker steals a database of unsalted or plaintext 256-bit passwords, they immediately have the keys to impersonate users (or to try them on other services, etc.). Hashing (with salt) ensures that a breach does not directly hand over the secret – the attacker still has to perform a preimage attack (which is effectively impossible) to recover the original password. This is a fundamental principle: passwords should never be stored in plaintext. Using a fast hash still satisfies this – but using a slow hash provides extra protection in case there’s any flaw in the assumption of high entropy. It’s an additional layer: if, hypothetically, an attacker found a weakness in the random generation (say the 256-bit secrets weren’t uniformly random due to a bug, reducing the true entropy), a memory-hard hash would significantly slow their ability to exploit that. Without it, a fast hash might be more quickly inverted if the entropy was lower than expected. Security-in-depth argues for not removing a protection mechanism unless it’s clearly unnecessary.
- Standard practice & regulatory compliance: It’s industry standard (and often legally required by regulations like GDPR, HIPAA, etc., and various data breach notification laws) to hash and salt stored user credentials. Removing Argon2id in favor of a fast hash might still be within compliance if salted, but storing even salted SHA-256 hashes of passwords might raise red flags in security audits – since it’s known that fast hashes are inadequate for typical passwords. An auditor may not fully trust that “all passwords are 256-bit random”; demonstrating due diligence usually means using state-of-the-art hashing. In essence, there is little downside to continuing using Argon2id (when properly configured) given modern server capabilities, and doing so avoids having to convince stakeholders that it’s safe to use a fast hash in this special case.
- Insider threat and password reuse: Hashing also protects against insider threats – e.g., a rogue employee or DBA who accesses the password database. If the passwords are truly random and unique, an insider might not gain much by knowing them (they likely can’t remember or reuse a 256-bit string). But consider if a user uses that 256-bit password on multiple systems (not recommended, but users do reuse credentials). If one system stored it in plaintext or reversible form, an insider or hacker could use it to breach the user’s other accounts. By hashing, the service ensures even insiders can’t easily get the actual secret. Essentially, not hashing would revert to storing a “key” that could be used elsewhere – a liability. Using a strong hash function (even fast) with salt mitigates this by not exposing the raw secret. Argon2id goes further by making even checking a password non-trivial for an insider (they can’t quickly verify a guess without significant compute). Though with 256-bit, an insider wouldn’t guess – they either have it or not – so the main point is keeping it hashed.
- Minimal performance impact in practice: While Argon2id can be configured to be very slow, it doesn’t have to be extreme if not needed. The cost can be dialed down (e.g., use 64 MB instead of 1 GB, 1 iteration instead of 3) to balance security and performance. If we trust passwords are strong, we could choose relatively mild Argon2 parameters – but still memory-hard – to satisfy best practices with negligible user-visible delay. For example, Argon2id with t=1, m=16 MB, p=1 might take only a few milliseconds – hardly a burden. So why not play it safe? You lose almost nothing by keeping a memory-hard hash (configured sanely), and you gain the assurance that if anything about the password-strength assumption is wrong, you’re still protected. As Gilles succinctly put it, “use as slow a password hashing function as possible” to accommodate the reality that even “decent” passwords strain human memory, and you want every advantage on the defensive side. There’s no pressing need to deviate from this wisdom.
- Salt and structural benefits: Even if one argues Argon2’s slowness isn’t needed for 256-bit secrets, one should absolutely still use salted one-way hashing at a minimum. If one were to naïvely drop Argon2 and be tempted to store the 256-bit secret directly (since it’s “unguessable”), that would be a grave mistake – it violates basic password handling principles. At minimum, a cryptographic hash with a salt is required to prevent an attacker from mounting rainbow-table attacks or from spotting that two users share the same password (two 256-bit passwords could, in principle, coincide – or be intentionally the same). Salting and hashing each password (even with SHA-256) prevents an attacker from attempting any precomputation or reusing work across accounts. Argon2, PBKDF2, etc., all inherently use per-password salts, so one should continue that practice. The additional benefit of Argon2id is simply raising the cost per guess, which as argued might not be needed for 256-bit secrets – but given it’s easy to leave in place (perhaps with lower parameters), it’s a cautious choice.
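The salting and cost-tuning points above can be sketched in a few lines. Argon2id itself requires a third-party package (e.g. argon2-cffi), so this sketch uses Python’s built-in memory-hard scrypt instead; the cost parameters are illustrative assumptions (this n/r combination uses roughly 16 MB of memory, in the spirit of the mild Argon2 settings mentioned earlier), not a tuning recommendation.

```python
import hashlib
import hmac
import os

# Illustrative, deliberately modest cost parameters: still memory-hard
# (~128 * r * n bytes = ~16 MiB), but cheap enough for interactive logins.
SCRYPT_N = 2 ** 14  # CPU/memory cost factor
SCRYPT_R = 8        # block size
SCRYPT_P = 1        # parallelism

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password under a fresh random 16-byte salt."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash under the stored salt; compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P)
    return hmac.compare_digest(candidate, digest)

# Per-user salts mean identical passwords hash to different digests,
# defeating rainbow tables and any reuse of work across accounts.
salt_a, digest_a = hash_password("same-secret")
salt_b, digest_b = hash_password("same-secret")
assert digest_a != digest_b
assert verify_password("same-secret", salt_a, digest_a)
assert not verify_password("wrong-secret", salt_a, digest_a)
```

The constant-time comparison via `hmac.compare_digest` is a small extra precaution against timing side channels during verification.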
Key management and usability: Another practical consideration is how users manage 256-bit passwords. These secrets are essentially random 64-character strings (if using hex, or ~43 characters if using base64). No human will memorize that; users will rely on password managers or secure storage. This shifts the problem – the user must protect their password manager master password or key. Often, the master password itself is something the user chooses (which might not be 256-bit!). So in an environment where 256-bit per-account passwords are used, typically it implies the user has one or a few master secrets securing everything (or uses hardware tokens). Those master secrets themselves should be hashed and protected with Argon2/scrypt if stored, because they might not be 256-bit random (users might pick a weaker master passphrase). In enterprise scenarios, 256-bit machine-generated passwords might actually just be API keys or crypto keys handled by software, not by humans – in which case, one might argue for using proper cryptographic storage (HSMs, key vaults) rather than the normal “user password” flow. In many cases, if you truly require that level of security and machine-only generation, you might migrate to using public-key authentication or key exchange protocols (e.g., X.509 client certs, SSH keys, or WebAuthn passkeys) instead of passwords. Those mechanisms don’t involve sending a raw secret for the server to hash at all. But that’s a broader architectural shift.
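The size figures above (64 hex characters, or ~43 base64 characters, for a 256-bit secret) follow directly from the encodings, as a small sketch with Python’s `secrets` module shows:

```python
import secrets

# 32 random bytes = 256 bits of entropy from the OS CSPRNG.
hex_secret = secrets.token_hex(32)      # hex encodes 2 chars per byte -> 64 chars
b64_secret = secrets.token_urlsafe(32)  # URL-safe base64, '=' padding stripped -> 43 chars

assert len(hex_secret) == 64
assert len(b64_secret) == 43
```

Either encoding carries the same 256 bits; base64 is merely denser (6 bits per character versus hex’s 4), which is why it needs fewer characters.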
Conclusion on this issue: If we strictly assume every user password is a 256-bit randomly generated string kept secret, then from a pure brute-force perspective, slow hashing (Argon2id, bcrypt, etc.) adds no security – a fast hash would be uncrackable too. However, seasoned security engineers would be reluctant to declare Argon2id obsolete. Slow hashing adds a layer of protection with minimal downside, covers scenarios of human error, and aligns with best practices for handling credentials. Furthermore, the environment around the password (user behavior, system integration, future changes) can shift, whereas the cost of leaving Argon2 in place is low.
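The “no added security from slow hashing” point can be made concrete with back-of-the-envelope arithmetic. The guess rate below is a hypothetical, generously high figure for a fast unsalted hash on specialized hardware, not a measurement:

```python
BITS = 256
GUESSES_PER_SECOND = 1e12  # hypothetical: a trillion fast-hash guesses per second

search_space = 2 ** BITS                  # ~1.16e77 candidates
seconds = search_space / GUESSES_PER_SECOND
years = seconds / (3600 * 24 * 365)

# Even at this rate, exhausting the space takes on the order of 10**57 years;
# making each guess a million times slower changes nothing meaningful here.
print(f"worst-case search time: {years:.1e} years")
```

This is exactly why, for genuinely 256-bit secrets, the argument for Argon2id rests on defense in depth rather than on guess-rate economics.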
In practice, the safest approach is to continue salting and hashing passwords, even extremely strong ones, while tuning the parameters in light of the very high entropy (for instance, a moderate iteration count, since the primary threat – offline guessing – is negligible, while retaining the other benefits of hashing). This gives you belt-and-suspenders protection: the password’s inherent strength plus the hash function’s defenses. As NIST’s guidelines emphasize, other mitigations (like secure hashing, salting, and rate-limiting) “are more effective at preventing modern brute-force attacks” than simply increasing complexity alone – and in our scenario we fortunately have both extreme complexity (entropy) and strong hashing.
- https://en.wikipedia.org/wiki/Entropy_(information_theory) ↩︎
- https://en.wikipedia.org/wiki/Password_strength ↩︎
- https://pages.nist.gov/800-63-3/sp800-63b.html#:~:text=Complexity%20of%20user,password%20length%2C%20is%20presented%20herein ↩︎
- https://pages.nist.gov/800-63-3/sp800-63b.html#:~:text=Users%E2%80%99%20password%20choices%20are%20very,include%20entries%20meeting%20that%20requirement ↩︎
- https://security.stackexchange.com/questions/21143/confused-about-password-entropy ↩︎
- https://en.wikipedia.org/wiki/Hardware_random_number_generator#:~:text=generator%20%28TRNG%29%2C%20non,1 ↩︎
- https://en.wikipedia.org/wiki/Hardware_random_number_generator#:~:text=Many%20natural%20phenomena%20generate%20low,are%20not%20truly%20random%2C%20an ↩︎
- https://csrc.nist.gov/csrc/media/Presentations/2023/overview-of-ais-2031/images-media/session-2-schindler-overview-of-ais-20-31.pdf#:~:text=PTG.2 ↩︎
- https://csrc.nist.gov/csrc/media/events/random-bit-generation-workshop-2012/documents/kelsey_800-90b.pdf#:~:text=800,concatenate%2032%20entropy%20source%20outputs ↩︎
- https://www.electronicdesign.com/resources/article/21796238/understanding-intels-ivy-bridge-random-number-generator ↩︎
- https://pages.nist.gov/800-63-3/sp800-63b.html#:~:text=to%20determine%20one%20or%20more,to%20resist%20only%20online%20attacks ↩︎
- https://datatracker.ietf.org/doc/rfc9106/ ↩︎
- https://cryptobook.nakov.com/mac-and-key-derivation/scrypt ↩︎
- https://pages.nist.gov/800-63-3/sp800-63b.html#:~:text=Verifiers%20SHALL%20store%20memorized%20secrets,The%20key ↩︎
- https://security.stackexchange.com/questions/241991/when-could-256-bit-encryption-be-brute-forced ↩︎
