What is a Number Base?
A number base (or radix) defines how many unique digits are used to represent numbers. Decimal uses 10 digits (0–9), binary uses 2 (0–1), octal uses 8 (0–7), and hexadecimal uses 16 (0–9 plus A–F). The value 255 in decimal is FF in hex, 11111111 in binary, and 377 in octal — they all represent the same number.
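As a quick sanity check, here is a minimal Python sketch (using Python's built-in bin, oct, and hex functions, not the converter itself) showing that these are all the same value:

```python
n = 255  # one integer value

# Python's built-in formatters render the same value in each base.
print(bin(n))   # 0b11111111  (binary)
print(oct(n))   # 0o377       (octal)
print(hex(n))   # 0xff        (hexadecimal)
```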
Common Uses
- Binary — how computers store data at the hardware level; every bit is 0 or 1
- Hexadecimal — memory addresses, color codes (#FF5733), byte-level data, checksums
- Octal — Unix file permissions (chmod 755), some legacy systems; all three notations appear in the code sketch below
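A small Python sketch of how these notations typically show up in code; the file path in the chmod line is just a placeholder:

```python
import os

byte_mask   = 0b11111111   # binary literal: all eight bits set (255)
perm_bits   = 0o755        # octal literal: Unix rwxr-xr-x permissions
color_value = 0xFF5733     # hex literal: the color code mentioned above

print(byte_mask, perm_bits, color_value)   # 255 493 16734003

# Octal permission bits are what chmod-style calls expect.
# os.chmod("script.sh", 0o755)   # "script.sh" is a placeholder path
```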
Quick Reference
| Decimal | Binary | Octal | Hex |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 1 | 1 | 1 | 1 |
| 2 | 10 | 2 | 2 |
| 4 | 100 | 4 | 4 |
| 8 | 1000 | 10 | 8 |
| 10 | 1010 | 12 | A |
| 15 | 1111 | 17 | F |
| 16 | 10000 | 20 | 10 |
| 32 | 100000 | 40 | 20 |
| 64 | 1000000 | 100 | 40 |
| 128 | 10000000 | 200 | 80 |
| 255 | 11111111 | 377 | FF |
Frequently Asked Questions
What is a number base?
A number base (or radix) defines how many unique digits a numbering system uses before it rolls over. Decimal (base 10) uses digits 0–9. Binary (base 2) uses only 0 and 1. Hexadecimal (base 16) uses 0–9 plus A–F. Computers process everything internally as binary, but hex and octal are human-friendly shorthand for binary data.
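A short Python illustration of the "rolls over" idea: in every base, the value equal to the base itself is written as 10.

```python
# Once the digits run out, the count rolls over to the next place value.
print(format(2, "b"))    # '10'  -> binary rolls over after 1
print(format(8, "o"))    # '10'  -> octal rolls over after 7
print(format(10, "d"))   # '10'  -> decimal rolls over after 9
print(format(16, "x"))   # '10'  -> hex rolls over after f
```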
How do I convert a binary number to decimal?
Write out the binary number and assign each digit a positional value that is a power of 2, starting from the right at 2⁰. Multiply each digit by its positional value and add all the results. For example, binary 1011 = (1×8) + (0×4) + (1×2) + (1×1) = 11 in decimal. This converter does it instantly for any supported base.
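If you want to see that method spelled out in code, here is a small Python sketch; binary_to_decimal is just an illustrative helper, not part of the converter:

```python
def binary_to_decimal(bits: str) -> int:
    """Sum each binary digit times its power-of-two place value."""
    total = 0
    for position, digit in enumerate(reversed(bits)):  # rightmost digit is 2**0
        total += int(digit) * (2 ** position)
    return total

print(binary_to_decimal("1011"))  # 11  -> (1*8) + (0*4) + (1*2) + (1*1)
```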
Why do programmers use hexadecimal instead of binary?
One hex digit represents exactly four binary bits (a nibble), so a byte (8 bits) fits in just two hex characters. That makes hex far more readable than raw binary — for example, the binary value 11111111 is simply FF in hex. Memory addresses, color codes (#FFFFFF), and debugging output all use hex for this reason.
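A minimal Python sketch of that byte-to-hex relationship, using only built-in formatting:

```python
byte = 0b11111111            # one 8-bit value

high_nibble = byte >> 4      # upper four bits
low_nibble  = byte & 0x0F    # lower four bits

# Each nibble maps to exactly one hex digit, so a byte is two hex characters.
print(format(high_nibble, "x"), format(low_nibble, "x"))  # f f
print(format(byte, "02x"))                                # ff
```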
What is the difference between octal and hexadecimal?
Octal (base 8) uses digits 0–7 and represents binary in 3-bit groups. Hexadecimal (base 16) uses digits 0–9 and A–F and represents binary in 4-bit groups. Octal was more common on older Unix systems; hexadecimal is the dominant choice in modern computing for memory addresses, color values, and bytecode.
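A small Python sketch that makes the grouping visible; group_bits is an illustrative helper written here, not a library function:

```python
def group_bits(value: int, group_size: int) -> str:
    """Render value in binary, split into groups of group_size bits from the right."""
    bits = format(value, "b")
    pad = (-len(bits)) % group_size          # left-pad so the groups divide evenly
    bits = bits.zfill(len(bits) + pad)
    return " ".join(bits[i:i + group_size] for i in range(0, len(bits), group_size))

n = 255
print(group_bits(n, 3), "->", format(n, "o"))  # 011 111 111 -> 377 (octal, 3-bit groups)
print(group_bits(n, 4), "->", format(n, "x"))  # 1111 1111 -> ff   (hex, 4-bit groups)
```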