A one-time pad (OTP) is the only encryption method that has been mathematically proven to be completely unbreakable — not just difficult to crack, but impossible, even with unlimited computing power. In 1949, mathematician Claude Shannon proved this in a landmark paper, Communication Theory of Secrecy Systems. The concept is older than that: Gilbert Vernam patented the basic mechanism in 1919, and spy agencies worldwide used physical OTP pads throughout the Cold War.
The idea is simple. You and your correspondent each carry identical copies of a "pad" — a long list of truly random characters that only the two of you possess. To send a secret message, you use the next unused row of the pad to scramble it. The recipient unscrambles it using the same row. Both of you then destroy that row. It is never used again — hence "one-time."
The math: modular addition (clock arithmetic)

For letter pads (A–Z), assign each letter a number: A=0, B=1, C=2 … Z=25. Encryption works like a clock — once you pass Z, you wrap around back to A.
To encrypt one letter: Add the plaintext number and the key number. If the result is 26 or more, subtract 26.
Example: Plaintext = H (7), Key = K (10) → 7 + 10 = 17 → R
To decrypt: Subtract the key number from the ciphertext number. If the result is negative, add 26.
Example: Ciphertext = R (17), Key = K (10) → 17 − 10 = 7 → H
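The two rules above can be sketched in a few lines of JavaScript. This is an illustration, not the demo's actual code — it assumes uppercase A–Z input and a key at least as long as the message, with no validation:

```javascript
// Mod-26 one-time pad for A–Z text. A=0 … Z=25; encryption adds the
// key letter, decryption subtracts it, wrapping at 26 in both cases.
const A = 'A'.charCodeAt(0);
const toNum = ch => ch.charCodeAt(0) - A;
const toChar = n => String.fromCharCode(A + n);

function otpEncrypt(plain, key) {
  return [...plain].map((ch, i) =>
    toChar((toNum(ch) + toNum(key[i])) % 26)).join('');
}

function otpDecrypt(cipher, key) {
  // +26 before the modulo keeps the result non-negative
  return [...cipher].map((ch, i) =>
    toChar((toNum(ch) - toNum(key[i]) + 26) % 26)).join('');
}
```

With the key XMCKL, the plaintext HELLO encrypts to EQNVZ, and decrypting EQNVZ with the same key recovers HELLO.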
For hex pads, the operation is XOR (exclusive-or). XOR works at the binary level: a bit XOR'd with itself is always 0, a bit XOR'd with 0 is unchanged. This makes XOR its own inverse — encrypting and decrypting use the exact same operation. Just XOR with the key to encrypt; XOR with the key again to decrypt.
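The same idea at the byte level, showing XOR's self-inverse property. The key bytes below are arbitrary made-up examples:

```javascript
// XOR is its own inverse: (p ^ k) ^ k === p for any byte values,
// so the same function both encrypts and decrypts.
function xorBytes(data, key) {
  return data.map((b, i) => b ^ key[i]);
}

const plaintext = [0x48, 0x65, 0x6c, 0x6c, 0x6f]; // "Hello"
const key       = [0x3a, 0x91, 0x07, 0xc4, 0x5e]; // example key bytes
const cipher    = xorBytes(plaintext, key);        // scrambled
const recovered = xorBytes(cipher, key);           // identical to plaintext
```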
Why it is theoretically unbreakable: Shannon's perfect secrecy

Imagine an attacker intercepts your ciphertext but has no copy of the pad. They want to figure out the original message. Here is the crucial insight: for every possible plaintext message of the right length, there exists exactly one key that would produce the observed ciphertext from that message. Since the key was chosen with perfect randomness, every plaintext is equally likely. The ciphertext gives the attacker zero useful information — mathematically, learning the ciphertext does not change the probability of any plaintext at all. Shannon called this perfect secrecy.
This is a far stronger guarantee than any other cipher in existence. Modern ciphers like AES are believed to be unbreakable, but their security rests on unproven mathematical assumptions — that certain problems are computationally hard. The one-time pad's security needs no such assumptions. It is an absolute mathematical proof. No computer, no algorithm, no mathematical breakthrough can ever break a correctly used OTP, because there is no information to extract.
The two-time pad: why reuse is catastrophic

Perfect secrecy holds only if each key row is used exactly once. Reusing a row destroys security completely. Here is why:
If C₁ = P₁ XOR K and C₂ = P₂ XOR K, then C₁ XOR C₂ = P₁ XOR P₂. The key cancels out entirely. An attacker who XORs both ciphertexts together gets the XOR of both plaintexts. Because natural language has predictable patterns — common words, letter frequencies, grammatical structures — a technique called crib-dragging (guessing known short phrases and sliding them across the combined output) can recover both original messages in minutes, with no key at all.
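The cancellation can be demonstrated in a few lines. All the byte values here are made-up examples:

```javascript
// Reusing a key: C1 ^ C2 = (P1 ^ K) ^ (P2 ^ K) = P1 ^ P2 — K cancels out.
const K  = [0x9f, 0x3c, 0x71, 0x04];   // the shared (reused) key bytes
const P1 = [0x41, 0x42, 0x43, 0x44];   // first plaintext, "ABCD"
const P2 = [0x57, 0x58, 0x59, 0x5a];   // second plaintext, "WXYZ"

const xor = (a, b) => a.map((x, i) => x ^ b[i]);
const C1 = xor(P1, K);
const C2 = xor(P2, K);
const combined = xor(C1, C2);          // equals xor(P1, P2) — no key needed
```

The attacker computing `combined` has removed the key entirely and is left with the XOR of the two plaintexts, which crib-dragging can then unravel.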
This is not a theoretical concern. In the 1940s, Soviet intelligence reused one-time pad pages under wartime production pressure. U.S. and British codebreakers ran a secret program called VENONA that exploited exactly this mistake. Over the following decades, VENONA analysts decrypted thousands of Soviet intelligence cables. The program identified dozens of Soviet spies operating in the United States and United Kingdom, including Julius Rosenberg. A single operational error — using a key twice — eventually unraveled years of intelligence work and cost lives. This is why the rule is absolute: one row, one message, no exceptions.
Step-by-step usage

Click Generate to create a pad. The key comes from crypto.getRandomValues(), which draws from your operating system's cryptographic entropy pool — hardware timing events, CPU randomness instructions, and other unpredictable physical sources. A pad produced by flipping coins, rolling dice carelessly, or having a human "pick random numbers" is almost certainly biased and unsafe. Human beings are terrible random number generators — we instinctively avoid long runs of the same character, favor certain numbers, and create subtle patterns we are not even aware of. Any such pattern in the key partially exposes the plaintext.

Type a message using letters A–Z. Spaces and punctuation are stripped automatically — the letter-based OTP cipher operates only on the 26-letter alphabet. Then generate a random key or paste one in. When you click Encrypt, the demo shows you every character pairing side by side: the plaintext letter on top, the key letter in the middle, and the resulting ciphertext letter on the bottom. Each column is one letter of encryption. To produce it: convert both letters to numbers (A=0, Z=25), add them, and if the sum reaches 26 wrap it back to 0 — the same way a clock wraps from 12 back to 1. The number you land on is the ciphertext letter. Notice that the output looks completely random even when the input is a recognizable word — that is the point. No pattern from the plaintext survives into the ciphertext as long as the key is random.
Paste the encrypted message and the exact same key that was used to encrypt it — the key must match letter for letter, in the same order. Click Decrypt to reverse the mod-26 addition: for each pair, subtract the key letter's number from the encrypted letter's number. If the result goes below zero, add 26 to wrap it back around. The visualization shows each step so you can verify the arithmetic yourself. One important thing to understand: if you use the wrong key, you will not get an error message. The math will still run and produce output — it will just be meaningless letters. A one-time pad has no way to tell a correct decryption from an incorrect one, because every possible plaintext is equally valid without the key. This is actually part of what makes it unbreakable — an attacker cannot tell when they have found the right answer.
Password strength comes down to one thing: how hard it is to guess. The measure used is entropy, counted in bits. Each additional bit of entropy doubles the number of possible passwords an attacker must try. A password with 40 bits of entropy has about one trillion possible values. One with 80 bits has about a septillion (10²⁴) — a trillion times more. One with 128 bits has about 3.4 × 10³⁸ possible values — a search space no attacker can exhaust with any physically plausible hardware.
Entropy is not about whether a password "looks" random to a human. It is purely a count of possibilities. A password of all lowercase letters is weaker than one mixing cases and digits — not because lowercase looks simpler, but because the smaller alphabet means fewer possible passwords of the same length.
How entropy is calculated

The formula is: entropy (bits) = length × log₂(charset size). The charset is the pool of characters the generator draws from: lowercase letters alone give 26 possibilities per character (about 4.7 bits); mixed case gives 52 (about 5.7 bits); letters plus digits give 62 (about 5.95 bits); the full printable ASCII set gives roughly 95 (about 6.6 bits).
A 16-character alphanumeric password has 16 × 5.95 ≈ 95 bits of entropy. Adding symbols gains about 1 bit per character. Adding more length gains about 6 bits per character. This is why length matters more than complexity. Going from 16 to 24 characters adds 48 bits of entropy. Switching from alphanumeric to full symbols on a 16-character password adds only about 10 bits. If you can only do one thing, make the password longer.
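The formula translates directly to code, a one-line sketch:

```javascript
// entropy (bits) = length × log2(charset size)
const entropyBits = (length, charsetSize) => length * Math.log2(charsetSize);

// 16 alphanumeric characters: 16 × log2(62) ≈ 95 bits
// going from 16 to 24 characters: 8 × log2(62) ≈ 48 more bits
```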
Why picking characters randomly is harder than it sounds

Choosing characters uniformly at random from a list seems simple, but there is a hidden trap called modulo bias. A computer's random number generator typically produces integers from 0 to 2³²−1 (about 4.3 billion possible values). If your alphabet has 62 characters, dividing 4.3 billion by 62 leaves a remainder of 4. This means the first four characters of the alphabet are very slightly more likely to appear than the others — a tiny but real statistical leak.
This generator eliminates modulo bias entirely using rejection sampling. It generates random values and throws away any that fall in the leftover range — at or above the largest exact multiple of 62 that fits in a 32-bit number. The remaining values map perfectly and evenly onto the alphabet. Every character has an exactly equal probability. No character is even fractionally favored. This is the same technique used in professional cryptographic libraries like libsodium and OpenSSL.
The crack time estimate

The strength bar and crack time shown beneath your password assume an attacker is trying one trillion guesses per second (10¹²/s). This is a realistic worst-case benchmark for a well-funded attacker with specialized GPU hardware against a weakly hashed password. Modern password-cracking rigs can exceed 100 billion guesses per second against old MD5 hashes. Against a properly designed slow hash like bcrypt or Argon2 — what reputable services use — the real rate drops to thousands or millions per second, making the crack time far longer in practice. The one-trillion benchmark is a deliberately conservative floor.
At 10¹² guesses per second: a 60-bit password falls in about a week on average. An 80-bit password takes roughly 19,000 years. A 128-bit password takes hundreds of millions of times the current age of the universe. The crack times shown are the average case (half the search space), so an attacker might get lucky and find it sooner — but the odds shrink astronomically with each added bit.
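The arithmetic behind these estimates is a short sketch — half the keyspace divided by the guess rate:

```javascript
// Average-case crack time: half the keyspace at 1e12 guesses/second.
const avgCrackSeconds = bits => 2 ** (bits - 1) / 1e12;
const days  = s => s / 86400;
const years = s => s / (86400 * 365.25);

// 60 bits → about a week; 80 bits → tens of thousands of years
```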
Diceware and the EFF word list

The Diceware passphrase generator uses the EFF large word list, a set of exactly 7,776 common English words. The number 7,776 is not a coincidence — it equals 6⁵, because the original Diceware method (created by Arnold Reinhold in 1995) was designed for physical dice. You roll five standard six-sided dice, read the five digits as a number like 24151, look it up in the word list, and that is your word. No computer required.
Each word chosen from a 7,776-word list contributes exactly log₂(7,776) ≈ 12.9 bits of entropy. Six words gives about 77.5 bits — comparable in strength to a fully random 12-character password drawn from the complete printable ASCII set. But a six-word passphrase like vessel-ranch-fading-comet-plank-orbit is far easier to memorize and type correctly than j#8Kw!mQ2&vP. Memorability is not a security weakness here — the security comes from the size of the word list and the randomness of the selection, not from the words being hard to remember.
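The roll-to-index arithmetic can be sketched as follows, assuming a 0-indexed word list (printed Diceware lists are keyed by the five-digit string itself, so the 0-based index here is my illustration):

```javascript
// A five-dice Diceware roll (digits 1–6) read as a five-digit base-6
// number gives a 0-based index into a 7,776-word list.
function rollToIndex(roll) {            // roll: a string like "24151"
  return [...roll].reduce((acc, d) => acc * 6 + (Number(d) - 1), 0);
}

const WORDS_PER_LIST = 6 ** 5;                   // 7776
const bitsPerWord = Math.log2(WORDS_PER_LIST);   // ≈ 12.9 bits per word
```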
The EFF specifically selected words that are: common enough to be recognizable, unambiguous to spell out loud, free of homophones (words that sound identical, like "two" and "too"), and not offensive. Short words and words with tricky alternate spellings were removed. The goal was a list a person could use in the field with a printed copy and physical dice — no memorization of rules required.
A critical note: when people choose words they think are random, they are not. Studies consistently show that human-selected passphrases contain far less entropy than their length suggests. People favor short common words, avoid unusual words, unconsciously follow grammatical patterns, and rarely pick the same word twice even when repetition would be statistically normal. All of these patterns reduce the actual search space an attacker needs to explore. This generator solves the problem with crypto.getRandomValues() and rejection sampling — every word on the list has an exactly equal probability of appearing, every time, regardless of what words came before.
Every number system in this converter shares the same underlying mechanism: positional notation. The value of a digit depends not just on what symbol it is, but on where it appears in the number. In base 10, the rightmost position is worth 10⁰ = 1, the next position to the left is worth 10¹ = 10, then 10² = 100, and so on. The number 347 means (3 × 100) + (4 × 10) + (7 × 1). Switch the base and the same logic applies exactly. In base 2, positions are worth 1, 2, 4, 8, 16… In base 16, they are worth 1, 16, 256, 4,096… In any base b, a digit string dₙ…d₁d₀ represents the value dₙ × bⁿ + … + d₁ × b + d₀. Every field in this converter is a different way of writing the same underlying number.
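The positional formula is one line of code when evaluated left to right (Horner's method — multiply the running total by the base and add each digit):

```javascript
// Value of a digit array in any base: ((d0·b + d1)·b + …)·b + dn
const digitsToValue = (digits, base) =>
  digits.reduce((acc, d) => acc * base + d, 0);

// [3, 4, 7] in base 10 → 347; [1, 0, 1, 0] in base 2 → 10
```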
Base 10: ten fingers and an anatomical accident

Humans count in base 10 for the same reason fingers exist: we have ten of them. Finger counting appears in virtually every pre-literate culture, and when you count on fingers, the natural stopping point — one complete set of hands — becomes the base. The very word digit comes from the Latin digitus, meaning finger. Roman numerals V and X are thought to depict an open hand and two hands crossed.
Base 10 has one mathematical convenience: 10 = 2 × 5, so it divides cleanly by 2 (giving 5) and by 5 (giving 2). But it is not particularly special. Base 12 is mathematically superior for everyday arithmetic — 12 divides evenly into halves, thirds, quarters, and sixths (divisors: 1, 2, 3, 4, 6, 12). This is why the Babylonians counted the three finger segments on each of four fingers to reach 12, and why that number embedded itself in our measurement systems: 12 inches in a foot, 12 months, 12 hours on a clock face. Base 60 (the Babylonian positional system for mathematics and astronomy) is even richer: 60 is divisible by 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, and 30. It survives today in 60 minutes to an hour, 60 seconds to a minute, and 360 degrees in a circle. Base 10 is not a mathematical ideal. It is an anatomical coincidence that became a global convention.
Base 2 (binary): the language of transistors

A transistor is a switch. It can be open or closed — conducting or not conducting. Two states. A single transistor cannot represent the digit 7, because there is no seventh state in the physics. This is why every computer ever built operates in binary. Not because base 2 is mathematically elegant (it produces very long digit strings), but because the hardware is two-state by design. Billions of transistors, each signaling 0 or 1, combine to represent everything from a spreadsheet cell to a video frame.
Claude Shannon formalized the mathematics of this in his 1948 paper A Mathematical Theory of Communication, coining the term bit (binary digit) as the fundamental unit of information. Shannon proved that log₂(N) bits are needed to distinguish among N equally likely possibilities. A coin flip is 1 bit. A standard six-sided die roll is log₂(6) ≈ 2.58 bits. A byte — eight bits — can hold 2⁸ = 256 distinct values, enough to represent any integer from 0 to 255. This tool internally represents all data as sequences of bytes, then converts those byte sequences between different bases for display.
Base 16 (hexadecimal): compact binary for humans

Binary is truthful but verbose. The letter H has the ASCII code 72. Written in binary, that's 01001000 — eight digits for a number that fits in two decimal digits. Hexadecimal exists as a shorthand. The key insight: 16 = 2⁴, so each hex digit represents exactly four bits — a unit called a nibble. One byte (eight bits) becomes exactly two hex digits. The value 72 is 0x48 in hex: 4 × 16 + 8 = 72. The word "Hello" is the hex string 48 65 6c 6c 6f — five bytes, ten hex characters. Converting between binary and hex requires no arithmetic; you simply substitute four-bit groups for hex digits and back.
Hexadecimal uses the digits 0–9 for values 0–9 and the letters A–F for values 10–15. Once you know to look, you see it everywhere: web colors (#FF5733 is three bytes of red, green, and blue: 255, 87, 51); memory addresses (0x7FFF0000); cryptographic hashes (sha256: 2cf24dba…); and network hardware identifiers (a MAC address like A4:C3:F0:85:12:3B). Hex is binary wearing a costume that humans can read and write without counting individual zeros and ones.
Early internet protocols — SMTP for email, HTTP for the web — were designed to carry 7-bit ASCII text. They were not built to safely transport arbitrary binary data: encrypted blobs, image files, compiled programs. Routers and mail relays of the era would corrupt or silently strip bytes with the high bit set. Base64 was the workaround: represent any binary data using only safe printable ASCII characters, so it can travel over any text-oriented channel without damage.
The Base64 alphabet has 64 symbols: A–Z (values 0–25), a–z (26–51), 0–9 (52–61), + (62), and / (63). Since 2⁶ = 64, each Base64 character encodes exactly 6 bits. The encoding algorithm takes three input bytes (24 bits) at a time and splits them into four 6-bit groups, each producing one output character. Three bytes in, four characters out — a size overhead of 33%. If the input length isn't a multiple of three, one or two = padding characters are appended to make the output length a multiple of four.
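Encoding one three-byte group by hand shows the mechanism — 24 bits split into four 6-bit indexes into the standard alphabet:

```javascript
const B64 =
  'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

// Three bytes → one 24-bit number → four 6-bit groups → four characters.
function base64EncodeTriple(b0, b1, b2) {
  const n = (b0 << 16) | (b1 << 8) | b2;   // pack 24 bits
  return B64[(n >> 18) & 63] + B64[(n >> 12) & 63] +
         B64[(n >> 6) & 63]  + B64[n & 63];
}

// "Man" = [77, 97, 110] → "TWFu"
```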
You encounter Base64 constantly: the data:image/png;base64,… strings embedded in web pages; the -----BEGIN RSA PRIVATE KEY----- blocks in this tool (PEM format is just Base64 wrapped in header lines); the tokens in HTTP Authorization: Bearer … headers; email attachments encoded by MIME. Base64 is not encryption — it is purely a transport encoding. Anyone who has the encoded string can decode it trivially. Its only purpose is to make binary data safe to move through text-only systems.
Base64's + and / characters cause problems in URLs, filenames, and database keys, where those symbols carry special meaning. Base62 removes them, using only 0–9, a–z, and A–Z — all 62 alphanumeric characters. The trade-off is mild inefficiency: you need slightly more characters to represent the same data (log₂(62) ≈ 5.95 bits per character, versus 6 bits for Base64). Base62 is the basis of most URL shorteners — the random-looking paths after bit.ly/ are typically Base62-encoded integers — and many short unique identifier schemes, where a compact, copy-pasteable, universally typeable string matters more than maximum encoding density.
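A sketch of Base62 integer encoding by repeated division. The alphabet ordering here (digits, then lowercase, then uppercase) is one common convention, not a fixed standard — real services vary:

```javascript
const B62 =
  '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';

// Encode a non-negative BigInt as a Base62 string.
function toBase62(n) {
  if (n === 0n) return '0';
  let out = '';
  while (n > 0n) {
    out = B62[Number(n % 62n)] + out;   // remainder → rightmost digit first
    n /= 62n;
  }
  return out;
}
```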
In Andy Weir's 2021 novel Project Hail Mary, the protagonist Ryland Grace encounters Rocky, an alien whose species evolved with six limbs. Just as humans arrived at base 10 through fingers, Rocky's people arrived at base 6 through their anatomy. When the two characters need a common mathematical language — neither speaks the other's tongue, and they have no shared reference — they begin by tapping out rhythmic patterns and identifying a shared concept of counting. Base 6 becomes one of the first bridges between their minds, because mathematics is expected to be universal even when everything else differs.
Base 6 uses only the digits 0–5. The number we call "ten" is written as 14 in base 6 (1 × 6 + 4 = 10). The number 36 — six squared — is written as 100. Base 6 has a quiet mathematical virtue: 6 = 2 × 3, so it divides evenly by 2 and by 3, giving it more useful fractions than base 10 for the same number of digits. Weir's use of base 6 reflects a real proposal among SETI researchers: number systems and mathematics are likely universal concepts that any intelligence capable of building a radio telescope or spacecraft would have arrived at independently of biology, geography, or culture.
Base 7: Arrival — the grammar of an alien mind

In Ted Chiang's 1998 short story Story of Your Life — and Denis Villeneuve's 2016 film Arrival — the heptapods are seven-limbed beings whose written language is radically non-linear. Their numerical system is base 7, consistent with their seven-fold symmetry. Base 7 uses only the digits 0–6. The number we call "seven" is written as 10 in base 7 (1 × 7 + 0). Unlike base 6, base 7 is mathematically awkward: 7 is prime, so it has no useful divisors other than 1 and itself. Division and fractions that simplify cleanly in base 6 or base 10 do not simplify in base 7.
What makes Chiang's treatment striking is that base 7 is just one surface symptom of a fundamentally different relationship with time and causality. The story is explicitly about the Sapir-Whorf hypothesis — the idea that the language you use shapes how you perceive reality. The heptapods' numerical base, their non-sequential writing system, and their experience of time are all of a piece: not just a different convention for the same underlying reality, but a genuinely different cognitive architecture. Base 7 signals the reader that translation isn't merely a vocabulary problem.
How this converter works: from text to bytes to numbers

Every field in this converter is a different representation of the same underlying data: a sequence of bytes. When you type in the Text field, the browser encodes your characters as UTF-8 — a byte encoding in which standard ASCII characters occupy one byte each, and characters outside ASCII occupy two to four bytes. The letter H is byte value 72 (hexadecimal 0x48). The word "Hello" produces the five bytes [72, 101, 108, 108, 111].
For the numeric bases (binary, base 6, base 7, decimal, hexadecimal, Base62), those bytes are assembled into a single large integer in big-endian order — most significant byte first. "Hello" becomes: 72 × 256⁴ + 101 × 256³ + 108 × 256² + 108 × 256 + 111 = 310,939,249,775. That integer is then written out in whatever target base you select. Base64 is handled differently: rather than treating the data as one giant integer, it works in three-byte chunks — see the Base64 section above for why.
To convert to a base: repeatedly divide the integer by the base and collect the remainders. The first remainder is the rightmost digit; the last remainder — produced when the quotient reaches zero — is the leftmost digit. To convert back, multiply each digit by the base raised to the power of its position and sum the results. This converter uses JavaScript's built-in BigInt type for all base conversions, which handles integers of arbitrary size without the rounding errors that would corrupt values in regular floating-point arithmetic.
Leading zero bytes need special handling. If your byte sequence begins with 0x00 — common in cryptographic keys, padding, and some binary protocols — a naive BigInt conversion silently discards them, since leading zeros contribute nothing to an integer's value. This converter preserves them: it counts leading zero bytes before conversion, then prepends the corresponding number of zero-digit characters to the output. Reversing the process works symmetrically — leading zero characters in any numeric field map back to leading 0x00 bytes in the byte sequence.
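The bytes-to-digits pipeline can be sketched with BigInt, including a simple leading-zero convention (one zero digit per leading zero byte — the tool's exact rule may differ):

```javascript
// Assemble bytes into one big-endian BigInt: each byte shifts the
// accumulator left 8 bits. [72,101,108,108,111] → 310939249775n.
function bytesToBigInt(bytes) {
  return bytes.reduce((acc, b) => (acc << 8n) + BigInt(b), 0n);
}

// Render the integer in an arbitrary base by repeated division,
// prepending one zero digit per leading 0x00 byte.
function bytesToBase(bytes, base, alphabet) {
  let n = bytesToBigInt(bytes);
  let out = '';
  while (n > 0n) {
    out = alphabet[Number(n % BigInt(base))] + out;
    n /= BigInt(base);
  }
  if (out === '') out = alphabet[0];      // the all-zero case
  let zeros = 0;
  while (zeros < bytes.length - 1 && bytes[zeros] === 0) zeros++;
  return alphabet[0].repeat(zeros) + out;
}
```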
Hash functions: related to base conversion, but fundamentally different

Every base converter field above represents the same underlying data. Convert from any one to any other and you can always convert back — information is perfectly preserved. A cryptographic hash function breaks that rule deliberately. Given any input of any length, it produces a fixed-length output called a digest. The same input always produces the same digest. But the function is entirely one-way: you cannot reconstruct the input from the digest, and changing even a single bit of the input completely scrambles the output — a property called the avalanche effect.
Hash functions are defined by three security properties. Preimage resistance: given a digest, it must be computationally infeasible to find any input that produces it. Second preimage resistance: given a specific input, it must be infeasible to find a different input with the same digest. Collision resistance: it must be infeasible to find any two distinct inputs that produce the same digest. A function that fails any of these is considered cryptographically broken — even if it still works as a non-security checksum for detecting accidental corruption.
The hash fields below the converter are read-only outputs. They always reflect the current byte sequence, but you cannot type into them to work backwards — there is no backwards to speak of.
MD5: fast, well-known, and cryptographically broken

MD5 (Message Digest Algorithm 5) was designed by Ron Rivest in 1991. It produces a 128-bit digest written as 32 lowercase hex digits. Like all Merkle-Damgård hash functions, it works by padding the input to a multiple of 512 bits, then processing one 512-bit chunk at a time. Each chunk updates a running 128-bit state, and the final state is the digest.
Each chunk drives 64 rounds of mixing across four 32-bit state variables (A, B, C, D). The 64 rounds are split into four groups of 16, each using a different nonlinear function — designed so that each function targets different bit-level relationships among the state variables:
F(b,c,d) = (b AND c) OR (NOT b AND d) — selects c or d based on b
G(b,c,d) = (b AND d) OR (c AND NOT d) — selects b or c based on d
H(b,c,d) = b XOR c XOR d — parity of all three
I(b,c,d) = c XOR (b OR NOT d) — a final mixing function

Each of the 64 steps: take the current A, add the round function output, one 32-bit word from the input chunk, and a per-step constant derived from the sine function (K[i] = ⌊2³² × |sin(i+1)|⌋); rotate the sum left by a fixed shift amount; then add B — the result becomes the new B. The state variables then rotate: D→A, C→D, B→C. After all 64 steps, add the block's output back to its starting state. Repeat for every chunk.
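The four round functions and the sine-derived constant schedule translate directly into code:

```javascript
// MD5's four nonlinear round functions (32-bit bitwise operations).
const F = (b, c, d) => (b & c) | (~b & d);  // selects c or d based on b
const G = (b, c, d) => (b & d) | (c & ~d);  // selects b or c based on d
const H = (b, c, d) => b ^ c ^ d;           // parity of all three
const I = (b, c, d) => c ^ (b | ~d);        // final mixing function

// The per-step constants: K[i] = floor(2^32 × |sin(i+1)|), i = 0…63.
const K = i => Math.floor(Math.abs(Math.sin(i + 1)) * 2 ** 32);
```

K(0) reproduces the first constant from the MD5 specification, 0xd76aa478.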
MD5 was broken by Xiaoyun Wang and Hongbo Yu in 2004, who demonstrated practical collision attacks in under an hour. By 2008, attackers had used MD5 collisions to forge fraudulent SSL certificates trusted by all browsers. MD5 is cryptographically dead for any security purpose. It remains useful as a fast non-security checksum — verifying a file wasn't accidentally corrupted in transit — because accidental corruption is not an adversarial attack.
SHA-256: the modern standard

SHA-256 is part of the SHA-2 family, designed by the NSA and standardized by NIST in 2001. It produces a 256-bit digest written as 64 hex digits. It also uses the Merkle-Damgård construction on 512-bit blocks, but with significantly more internal state and more complex mixing than MD5.
SHA-256 maintains eight 32-bit state variables per block (versus MD5's four) and runs 64 rounds using two more sophisticated auxiliary functions: Ch(e,f,g) = (e AND f) XOR (NOT e AND g), which chooses bits from f or g depending on each bit of e; and Maj(a,b,c) = (a AND b) XOR (a AND c) XOR (b AND c), which takes the majority bit. These are combined with four rotation-based diffusion functions (Σ₀, Σ₁, σ₀, σ₁), each XOR-ing together three different rotations of a word, ensuring that information from every bit position spreads into every other position over successive rounds.
SHA-256's 64 round constants are the first 32 bits of the fractional parts of the cube roots of the first 64 prime numbers. The eight initial state values are the first 32 bits of the fractional parts of the square roots of the first eight primes. These are nothing-up-my-sleeve numbers — generated by a public, deterministic process so that no designer could have secretly chosen constants that create hidden weaknesses. SHA-256 has been subjected to over two decades of intense cryptanalysis without a practical break. It is the backbone of TLS certificates, Git commit hashes, Bitcoin's proof-of-work, HMAC authentication, and most modern code signing systems.
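Both constant tables can be regenerated from that public recipe — a sketch of the "nothing up my sleeve" process:

```javascript
// First 32 bits of the fractional part of x, as an unsigned integer.
const frac32 = x => Math.floor((x - Math.floor(x)) * 2 ** 32);

// Round constants come from cube roots of primes; initial state
// values come from square roots of primes.
const roundConstant = p => frac32(Math.cbrt(p));  // p = 2 → 0x428a2f98
const initialState  = p => frac32(Math.sqrt(p));  // p = 2 → 0x6a09e667
```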
All traditional encryption has a fundamental problem: both people need to already share a secret key before they can communicate privately. But how do you share a key without a secure channel to share it over? For centuries, this had no good answer. Spies used physical dead drops. Diplomats used diplomatic pouches. Ordinary people just could not communicate privately with strangers at all.
Public-key cryptography, developed in the 1970s by Whitfield Diffie, Martin Hellman, and separately by Rivest, Shamir, and Adleman, solved this with a clever mathematical insight: build a lock that anyone can close but only one person can open. The lock is the public key. The key to open it is the private key. You publish your public key to the world. Anyone can use it to lock a message meant for you. Only you, holding the private key, can unlock it. No secure channel needed to share the lock — the whole point is that the lock itself is safe to share openly.
The math behind RSA — the factoring trapdoor

RSA security rests on a simple asymmetry in arithmetic: multiplying two numbers is easy; un-multiplying (factoring) is hard.
Take two prime numbers — say, 61 and 53. Multiplying them together to get 3,233 takes an instant. But if someone hands you 3,233 and asks you to find the two primes that multiply to make it, you have to search. For small numbers, you can find the answer quickly. For very large numbers — hundreds of digits long — every known algorithm takes longer than the age of the universe, even on the fastest computers ever built.
RSA chooses two enormous secret prime numbers, multiplies them together to form the public modulus, then uses properties of modular arithmetic to construct a matched pair of mathematical operations. One operation — encryption — can only be done with the public key. The reverse operation — decryption — can only be done by someone who knows the original two prime factors. Those factors are mathematically embedded in the private key. Anyone who could factor the public modulus could reconstruct the private key, but no one can factor a number that large in practice.
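A toy version using the primes from the example above (61 and 53) makes the key-pair relationship concrete. The exponents e = 17 and d = 2753 are textbook choices satisfying e·d ≡ 1 (mod (61−1)(53−1)); real RSA uses primes hundreds of digits long and always adds padding (see OAEP below):

```javascript
// Square-and-multiply modular exponentiation with BigInt.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

const n = 61n * 53n;          // 3233, the public modulus
const e = 17n;                // public exponent
const d = 2753n;              // private exponent
const message = 65n;
const cipher  = modPow(message, e, n);  // anyone can encrypt with (n, e)
const plain   = modPow(cipher, d, n);   // only the holder of d can decrypt
```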
What "2048-bit" actually means

The bit count refers to the size of the public modulus — the large number produced by multiplying the two secret primes. A 2048-bit number contains 2,048 binary digits. Written out in ordinary decimal, it would be about 617 digits long. The total number of possible 2048-bit RSA keys is staggeringly larger than the number of atoms in the observable universe (roughly 10⁸⁰). The best known factoring algorithms, run on every computer on Earth working in parallel, would still take far longer than the current age of the universe to crack a 2048-bit key.
RSA has two serious practical problems for encrypting real messages. First, it has a hard size limit: you can only encrypt data smaller than the public modulus. For a 2048-bit key, after accounting for required OAEP padding, that is around 190 bytes (with SHA-256) — barely enough for a couple of sentences. Second, RSA is slow. The mathematical operations involved are thousands of times slower than the symmetric ciphers used inside your phone and browser every day.
The solution is called hybrid encryption, and it is used by HTTPS, Signal, PGP, SSH, and virtually every other secure communication system in existence. The idea is elegant: generate a fresh random 32-byte session key; encrypt the actual message with fast symmetric AES-256-GCM under that session key; encrypt only the session key with RSA; and send both pieces together. The recipient uses the RSA private key to recover the session key, then uses the session key to decrypt the message.
RSA only ever touches 32 bytes — the session key. The actual message content never passes through RSA at all. The session key is generated fresh for every single message, so even if one were somehow compromised, no other message is affected.
What AES-256-GCM is and why it is used

AES (Advanced Encryption Standard) is the world's most widely deployed symmetric cipher. Adopted by the U.S. government in 2001 after a public international competition, it has been analyzed intensively for over two decades and no practical weakness has been found. Every modern CPU — in phones, laptops, and servers — includes dedicated hardware instructions for AES, making it extraordinarily fast. On modern hardware, AES can encrypt data faster than the computer can read it from memory.
256 is the key size in bits. A 256-bit AES key has 2²⁵⁶ possible values — a number with 77 digits. Exhaustively trying every possible AES-256 key is not physically possible with any technology that exists or could ever exist within the laws of thermodynamics. Even a computer the size of a planet, running since the Big Bang, would not have made a dent.
GCM stands for Galois/Counter Mode — the mode that controls how AES is applied to the data. GCM does two jobs at once: it encrypts the data (providing confidentiality), and it computes a 128-bit authentication tag (providing integrity). The authentication tag is a cryptographic fingerprint of the entire encrypted message. If even a single byte of the encrypted output is changed, flipped, corrupted, or tampered with — by accident or by an attacker — decryption detects this and fails with an error. You cannot silently receive a modified message. This combined property is called authenticated encryption, and it is why GCM is preferred over older modes that only encrypted but did not detect tampering.
GCM also requires a unique initialization vector (IV) — a random 12-byte value — for every encryption operation. Even if you encrypt the same message twice with the same AES key, two different IVs produce two completely different encrypted outputs. An attacker watching the traffic cannot tell when the same message is being sent repeatedly. The IV is not secret and travels alongside the encrypted message.
OAEP padding — why raw RSA encryption is never used

Textbook RSA encryption (no padding) has a fundamental weakness: it is deterministic. The same input always produces the same output. An attacker who suspects the plaintext is one of a small set of candidates — "yes" or "no," for example — can encrypt each candidate with the public key and compare to the captured message. This instantly confirms which it was. Several additional mathematical attacks also work against unpadded RSA.
OAEP (Optimal Asymmetric Encryption Padding) fixes this by mixing random bytes into the input before RSA processes it. The same message encrypted twice produces two completely different RSA outputs, making the candidate-comparison attack impossible. OAEP has a formal security proof, meaning its security against certain attack models can be mathematically demonstrated rather than merely assumed. It is the minimum acceptable padding for RSA encryption. This tool uses RSA-OAEP throughout.
Digital signatures — how RSA-PSS works

A digital signature lets you prove two things about a message: that it came from you specifically, and that it has not been changed since you signed it. Think of it like a wax seal on a letter — the seal can only be made by whoever holds the signet ring (the private key), and breaking the seal to tamper with the letter is immediately obvious.
The signing process has three steps: hash the message to a short fixed-length digest, apply padding to that digest, and run the RSA private-key operation on the result. Anyone holding the public key can verify the signature by reversing the RSA step and checking the recovered digest against the message.
PSS (Probabilistic Signature Scheme) adds randomness to the signing process, similar to what OAEP does for encryption. Each signature of the same message looks different, preventing certain mathematical attacks possible against older padding schemes. PSS also has a formal security proof. This tool uses RSA-PSS for all signing operations.
Two separate key pairs — why they must not be mixed

Clicking Generate Key Pair produces four PEM-formatted keys: an RSA-OAEP pair for encryption and decryption, and an RSA-PSS pair for signing and verification. These are kept separate deliberately. The mathematical operations for encryption and signing are different, and using the same key for both introduces a class of cross-protocol attacks — a valid signature can be misinterpreted as an encrypted blob, or vice versa — that can completely undermine security. Keeping the key pairs separate eliminates this entire category of attack.
The exact format of the encrypted output

When you encrypt a message, the tool produces a Base64-encoded package containing the RSA-OAEP-encrypted session key, the 12-byte GCM initialization vector, and the AES-GCM ciphertext with its authentication tag.
Everything runs entirely in your browser using the Web Crypto API, a standard built into every modern browser. No keys, messages, or signatures are ever sent to any server or leave this page in any form.
Encryption workflow