Bit, Byte, Word, ...etc; Binary, Hexadecimal & Decimal


Post by bbfox »

This article is for newbies.

This content is not well-organized. The topics may not be in the correct order.

Warning: This post features AI-assisted content. While I created its foundational structure, an AI generated the content, which I then curated. If you prefer not to engage with such material, please use your browser's back button.



Table of Contents
Bit, Byte, Word, ...etc; Binary, Hexadecimal & Decimal
LSB and MSB
Binary & Hexadecimal Representation
Signed & Unsigned value
Real number format: Float & Double
Binary, Decimal & Hexadecimal System
Binary & Hexadecimal Notation
Using Google Calculator to Convert Between Different Notations




Bit, Byte, Word, ...etc; Binary, Hexadecimal & Decimal
Computers operate on a fundamental unit of data called the bit. A bit has two possible states: 0 and 1. Here's a more detailed explanation of the basic units in computing and their ranges:

Bit: The smallest unit of data in a computer, representing a single binary value of either 0 or 1.

Byte: Consists of 8 bits and is the unit of storage to hold a single character of information.
Unsigned range: 2^8 = 256 values, 0 to 255.
Signed range: -128 to 127.

Word: Typically refers to the natural unit of data used by a particular processor design. In x86 terminology (which most tools follow), a word is 2 bytes (16 bits).
Unsigned range: 2^16 = 65536 values, 0 to 65535.
Signed range: -32768 to 32767.

Double Word: Generally refers to 32 bits (4 bytes).
Unsigned range: 2^32 = 4294967296 values, 0 to 4294967295.
Signed range: -2147483648 to 2147483647.

Quad Word: 64 bits (8 bytes).
Unsigned range: 2^64 = 18446744073709551616 values, 0 to 18446744073709551615.
Signed range: -9223372036854775808 to 9223372036854775807.
It's important to note that the definition of a "Word" can vary depending on the system architecture. For example, in some 64-bit systems, a "Word" might be defined as 64 bits. However, the definitions provided here are generally applicable in many contexts.
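
As a quick check, the ranges above can be printed with a short C program (a minimal sketch, assuming a platform that provides the fixed-width types from stdint.h):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* Byte: 8 bits */
    printf("byte : unsigned 0..%u, signed %d..%d\n",
           (unsigned)UINT8_MAX, INT8_MIN, INT8_MAX);
    /* Word: 16 bits */
    printf("word : unsigned 0..%u, signed %d..%d\n",
           (unsigned)UINT16_MAX, INT16_MIN, INT16_MAX);
    /* Double word: 32 bits */
    printf("dword: unsigned 0..%" PRIu32 ", signed %" PRId32 "..%" PRId32 "\n",
           UINT32_MAX, INT32_MIN, INT32_MAX);
    /* Quad word: 64 bits */
    printf("qword: unsigned 0..%" PRIu64 ", signed %" PRId64 "..%" PRId64 "\n",
           UINT64_MAX, INT64_MIN, INT64_MAX);
    return 0;
}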



LSB and MSB
A byte is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. The bits within a byte are numbered from 0 to 7. Here's a visual representation for better understanding:

bit#   7  6  5  4  3  2  1  0
      msb                  lsb

LSB (Least Significant Bit): The LSB is the bit position in a binary number having the lowest value. It is the right-most bit in a byte. In the byte's bit sequence (7 6 5 4 3 2 1 0), bit #0 is the LSB. The value of the LSB represents the smallest unit that can be represented in the byte. For example, in a byte representing an unsigned integer, the LSB contributes a value of either 0 or 1 to the total value.

MSB (Most Significant Bit): The MSB is the bit position in a binary number having the highest value. It is the left-most bit in a byte. In the byte's bit sequence (7 6 5 4 3 2 1 0), bit #7 is the MSB. The value of the MSB has the most significant impact on the total value represented by the byte. For instance, in an 8-bit binary number, the MSB determines the highest value that can be represented (128 in the case of an unsigned byte).

Understanding the concepts of LSB and MSB is crucial for various computing and digital electronics tasks, including data storage, communication protocols, and low-level programming. It defines how data is read from and written to memory, how arithmetic operations are performed, and how data is transmitted over networks.
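
In C, for example, the LSB and MSB of a byte can be read with simple bit masks and shifts (a minimal sketch):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t b = 0xBD;           /* 1011 1101 in binary */
    int lsb = b & 0x01;         /* bit #0, the right-most bit */
    int msb = (b >> 7) & 0x01;  /* bit #7, the left-most bit */
    printf("value = 0x%02X, LSB = %d, MSB = %d\n", (unsigned)b, lsb, msb);
    return 0;
}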



Binary & Hexadecimal Representation
In low-level computing, data is often represented in binary or hexadecimal format. This is due to the fundamental unit of data in computers being the bit, which can have a value of either 0 or 1.


Binary Representation: Binary is a base-2 numerical system that uses only two symbols, typically 0 and 1. Each digit in a binary number represents a power of 2, with the least significant bit (LSB) representing 2^0, the next bit representing 2^1, and so on.

Hexadecimal Representation: Hexadecimal (or hex) is a base-16 numerical system. It is widely used in computing as a more human-friendly representation of binary-coded values. One hex digit can represent four binary digits (bits), as it encompasses sixteen possible values, from 0 to 15. In computing, these values are represented as follows: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A (for 10), B (for 11), C (for 12), D (for 13), E (for 14), and F (for 15). Therefore, a single byte (which is 8 bits) can be represented as two hexadecimal digits.


Example of Binary to Hexadecimal Conversion:

Let's convert the binary number 1011 1101 to hexadecimal.

Divide the binary number into groups of four starting from the right: 1011 1101 becomes 1011 and 1101.
Convert each group into its hexadecimal equivalent:
1011 in binary is B in hexadecimal (since 1×2^3 + 0×2^2 + 1×2^1 + 1×2^0 = 8 + 0 + 2 + 1 = 11).
1101 in binary is D in hexadecimal (since 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 8 + 4 + 0 + 1 = 13).
Therefore, 1011 1101 in binary is BD in hexadecimal.
Hexadecimal representation offers a compact and understandable way to express binary values, significantly simplifying the handling and visualization of binary data in programming, debugging, and documentation within the field of computing.
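
In C, this conversion can be checked by parsing the binary digits with strtoul (base 2) and printing the result in base 16 (a minimal sketch):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Parse "10111101" as a base-2 number, then print it in base 16. */
    unsigned long value = strtoul("10111101", NULL, 2);
    printf("binary 10111101 = hex %lX = decimal %lu\n", value, value);
    return 0;
}

This prints: binary 10111101 = hex BD = decimal 189.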



Signed & Unsigned value
In computer science, numerical data can be represented in two ways: signed and unsigned. The distinction between these two types is crucial for understanding how numbers, especially those representing positive and negative values, are stored and manipulated in computing systems.

Unsigned Value: An unsigned number is always positive or zero and uses all the bits available to represent the number's magnitude. For example, in an 8-bit unsigned representation, the numbers range from 0 to 255.

Signed Value: Signed numbers can represent both positive and negative values. One common method to achieve this is by using the most significant bit (MSB) as a sign bit. In this scheme, known as two's complement:

If the MSB is 0, the number is positive (including zero).
If the MSB is 1, the number is negative.

Example for word type (16-bit):

Positive value, MSB = 0
bit#   F  E  D  C  B  A  9  8  7  6  5  4  3  2  1  0
       0  .  .  .  .  .  .  .  .  .  .  .  .  .  .  . 
	  
Negative value, MSB = 1
bit#   F  E  D  C  B  A  9  8  7  6  5  4  3  2  1  0
       1  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .

In the two's complement representation, the positive numbers are represented as usual, but negative numbers are represented differently. To find the two's complement (and thereby the binary representation of a negative number), you invert all the bits of the number's absolute value and then add 1.


Example for a Word (16-bit)

Let's consider a 16-bit word, which allows us to represent numbers in both signed and unsigned formats.

Unsigned 16-bit: The range is from 0 to 2^16 - 1 (0 to 65535).
Signed 16-bit (Two's Complement): The range is from -2^15 to 2^15 - 1 (-32768 to 32767).
For a positive number example, let's use +12345:

In binary, 12345 is 0011 0000 0011 1001 (no need for sign bit manipulation as the number is positive).
For a negative number example, let's use -12345:

First, find the binary representation of the positive value +12345, which is 0011 0000 0011 1001.
Invert the bits: 1100 1111 1100 0110.
Add 1 to the inverted bits: 1100 1111 1100 0111.
Thus, the two's complement representation of -12345 in a 16-bit word is 1100 1111 1100 0111, indicating it's a negative number due to the MSB being 1.

This system allows for straightforward arithmetic operations and simplifies the circuitry for addition and subtraction in computing hardware, as the same circuit can handle both positive and negative numbers.
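
The -12345 example can be verified in C by reinterpreting the bit pattern 0xCFC7 as a signed 16-bit integer (a minimal sketch):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t bits = 0xCFC7;         /* 1100 1111 1100 0111 */
    int16_t value = (int16_t)bits;  /* reinterpret as signed 16-bit */
    printf("0x%04X as unsigned = %u\n", (unsigned)bits, (unsigned)bits);
    printf("0x%04X as signed   = %d\n", (unsigned)bits, value);

    /* Two's complement by hand: invert the bits of +12345, then add 1. */
    uint16_t manual = (uint16_t)(~12345 + 1);
    printf("~12345 + 1 = 0x%04X\n", (unsigned)manual);
    return 0;
}

This prints 53191 for the unsigned interpretation and -12345 for the signed one, and confirms that ~12345 + 1 yields 0xCFC7.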



Real number format: Float & Double
In computer science, floating-point numbers are used to represent real numbers (numbers that can have fractional parts) with a certain degree of precision. The term "floating point" refers to the fact that the decimal point can "float"; that is, it can be placed anywhere relative to the significant digits of the number. This flexibility allows floating-point numbers to represent a very wide range of values.

Float:
A float, short for "floating-point number", typically refers to a single-precision floating-point format that uses 32 bits (4 bytes) of memory. In this format, the number is divided into three parts: the sign (1 bit), the exponent (8 bits), and the mantissa or significand (23 bits).
The sign bit determines whether the number is positive or negative. The exponent encodes the magnitude of the number, and the mantissa carries the significant digits of the number.
The 32-bit float has a limited precision and can represent numbers approximately within the range of 1.4E-45 to 3.4E+38.

Example of a float: Consider the number 0.15625. In binary, this is 0.00101. In the IEEE 754 single-precision format, the number is stored after normalization to 1.01 × 2^-3, with the exponent encoded in biased form:

0.15625
hexadecimal = 0x3e200000

hex.        3 |       e |       2 |      0  |       0 |       0 |       0 |      0  |
bin.  0 0 1 1 | 1 1 1 0 | 0 0 1 0 | 0 0 0 0 | 0 0 0 0 | 0 0 0 0 | 0 0 0 0 | 0 0 0 0 |
-------+-------------------+--------------------------------------------------------+
  sign |    exponent       |                        mantissa                        |

  
0.15625 * 2 = 0.3125  (0)
0.3125  * 2 = 0.625   (0)
0.625   * 2 = 1.25    (1)
0.25    * 2 = 0.5     (0)
0.5     * 2 = 1.0     (1)
0       * 2 = 0

0.15625 in binary = 0.00101 (read the digits before the decimal point from top to bottom)
0.00101 = 1.01 * 2^-3
exponent = -3 + 127 = 124 = 01111100 in binary
sign + exponent + mantissa (binary format)
= 0 + 01111100 + 01000000000000000000000
= 0x3e200000 in hexadecimal

Double:
A double represents a double-precision floating-point format that uses 64 bits (8 bytes) of memory. Compared to single precision, double precision increases the size of both the exponent (11 bits) and the mantissa (52 bits).
This increased size allows for a much wider range and greater precision. Doubles can approximately represent numbers in the range of 4.9E-324 to 1.8E+308 with much greater precision than floats.

Example of a double: Consider again the number 0.15625. As a double, this number would still begin with 0.00101 in binary for the mantissa but with more bits available for the fraction, allowing for a more precise representation and a larger range for the exponent.
The choice between using float and double depends on the requirements for precision and range in a given application. Floats consume less memory and can be sufficient for many tasks, but when calculations require a higher degree of accuracy or involve very large or very small numbers, doubles are preferred despite their larger memory footprint and potential performance impact.

I'm not familiar with how to convert float and double values to hexadecimal by hand; just using an online converter will be fine.
Just remember: a float is 32-bit, a double is 64-bit.
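
The raw bits of a float or double can also be dumped in C by copying the bytes into an integer of the same size (a minimal sketch; memcpy is used to avoid strict-aliasing problems):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float f = 0.15625f;
    uint32_t bits32;
    memcpy(&bits32, &f, sizeof bits32);  /* copy the float's raw bytes */
    printf("float  %f = 0x%08X\n", f, (unsigned)bits32);  /* 0x3E200000 */

    double d = 0.15625;
    uint64_t bits64;
    memcpy(&bits64, &d, sizeof bits64);  /* copy the double's raw bytes */
    printf("double %f = 0x%016llX\n", d, (unsigned long long)bits64);
    return 0;
}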



Binary, Decimal & Hexadecimal System
In computing and digital electronics, numerical values are often represented in binary (base-2), decimal (base-10), and hexadecimal (base-16). The decimal system is the standard system for denoting integer and non-integer numbers. It is the most familiar number system to the general public and is commonly used in most areas of life.

Binary System (Base-2): Uses two symbols, 0 and 1. Each digit represents a power of 2.

Decimal System (Base-10): Uses ten symbols, from 0 to 9. Each digit represents a power of 10.

Hexadecimal System (Base-16): Uses sixteen symbols, from 0 to 9 and A to F, where A to F represent decimal values 10 to 15. Each digit represents a power of 16.


Conversion Examples
Decimal to Binary

To convert from decimal to binary, divide the number by 2 and record the remainder. Continue dividing the quotient by 2 until you reach 0. The binary representation is the remainders read in reverse order.

Example: Decimal 10 to Binary.

10 ÷ 2 = 5, remainder 0.
5 ÷ 2 = 2, remainder 1.
2 ÷ 2 = 1, remainder 0.
1 ÷ 2 = 0, remainder 1.

Reading the remainders backward gives 1010.

Example: Convert decimal 22398 to binary.

22398 ÷ 2 = 11199 remainder 0
11199 ÷ 2 = 5599  remainder 1
 5599 ÷ 2 = 2799  remainder 1
 2799 ÷ 2 = 1399  remainder 1
 1399 ÷ 2 = 699   remainder 1
  699 ÷ 2 = 349   remainder 1
  349 ÷ 2 = 174   remainder 1
  174 ÷ 2 = 87    remainder 0
   87 ÷ 2 = 43    remainder 1
   43 ÷ 2 = 21    remainder 1
   21 ÷ 2 = 10    remainder 1
   10 ÷ 2 = 5     remainder 0
    5 ÷ 2 = 2     remainder 1
    2 ÷ 2 = 1     remainder 0
    1 ÷ 2 = 0     remainder 1

Reading the remainders backward, we get 101 0111 0111 1110.

So, decimal 22398 in binary is 101 0111 0111 1110.
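
This repeated-division procedure translates directly into C (a minimal sketch; print_binary is a hypothetical helper written for this example):

#include <stdio.h>

/* Print n in binary: divide by 2, record each remainder,
   then read the remainders backward. */
static void print_binary(unsigned int n)
{
    char digits[64];
    int count = 0;
    if (n == 0) {
        putchar('0');
        return;
    }
    while (n > 0) {
        digits[count++] = (char)('0' + n % 2);  /* record the remainder */
        n /= 2;                                 /* continue with the quotient */
    }
    while (count > 0)
        putchar(digits[--count]);               /* read backward */
}

int main(void)
{
    print_binary(22398);   /* prints 101011101111110 */
    putchar('\n');
    return 0;
}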

Decimal to Hexadecimal

To convert from decimal to hexadecimal, divide the number by 16 and record the remainder. Continue dividing the quotient by 16 until you reach 0. The hexadecimal representation is the remainders read in reverse order, using A to F for remainders 10 to 15.

Example: Decimal 10 to Hexadecimal.

10 ÷ 16 is 0 with a remainder of 10.

Since the remainder is 10, it corresponds to 'A' in hexadecimal.

So, decimal 10 is A in hexadecimal.

Example: Convert decimal 22398 to hexadecimal.

22398 ÷ 16 = 1399 remainder 14 (E in hexadecimal)
 1399 ÷ 16 = 87   remainder 7
   87 ÷ 16 = 5    remainder 7
    5 ÷ 16 = 0    remainder 5
Reading the remainders backward, we get 577E.

So, decimal 22398 in hexadecimal is 577E.
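
The same loop works for base 16; the only difference is that remainders 10 to 15 are written as the letters A to F (a minimal sketch):

#include <stdio.h>

int main(void)
{
    const char *hex_digits = "0123456789ABCDEF";
    unsigned int n = 22398;
    char out[16];
    int count = 0;
    while (n > 0) {
        out[count++] = hex_digits[n % 16];  /* remainders 10..15 become A..F */
        n /= 16;                            /* continue with the quotient */
    }
    while (count > 0)
        putchar(out[--count]);              /* read backward: prints 577E */
    putchar('\n');
    return 0;
}

In everyday C code, printf("%X\n", 22398) performs the same conversion and prints 577E.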

Simply use the built-in Windows calculator in Programmer mode to perform binary, decimal, and hexadecimal number conversions.



Binary & Hexadecimal Notation

Hexadecimal Notation:

0x Prefix: The 0x (zero followed by lowercase 'x' or uppercase 'X') prefix is commonly used in many programming languages (like C, C++, C#, Java, Python, and more) to denote a hexadecimal value. For example, 0x1A3F represents the hexadecimal number 1A3F.

h Suffix: Alternatively, some assembly languages and older programming environments use an h at the end of the number to indicate that it is hexadecimal. In this case, the number must usually begin with a digit, so a leading 0 is added when the value starts with a letter, to avoid confusion with identifiers. For example, 1A3Fh represents the hexadecimal number 1A3F, while A3F would be written as 0A3Fh.

In CE's auto assembler, the default number format is hexadecimal; place # in front of a value to notate it as decimal.


Binary Notation:

0b Prefix: Many modern programming languages use the 0b (zero followed by lowercase 'b' or uppercase 'B') prefix to signify binary numbers. For instance, 0b11010110 represents the binary number 11010110 in languages like Python.

b Suffix: Binary numbers are often represented with a b at the end of the number in certain contexts, such as assembly languages or documentation. This makes it clear that the number is binary. For example, 11010110b indicates a binary number.

These conventions are crucial for readability and clarity in code, ensuring that numbers are interpreted correctly according to their intended base by both humans and compilers/interpreters.
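
For example, in C these literals all denote the same value (note that the 0b prefix is a GCC/Clang extension, standardized only in C23):

#include <stdio.h>

int main(void)
{
    unsigned int a = 0x1A3F;              /* hexadecimal, 0x prefix */
    unsigned int b = 6719;                /* the same value in decimal */
    unsigned int c = 0b0001101000111111;  /* binary, 0b prefix (GCC/Clang/C23) */
    printf("%u %u %u\n", a, b, c);        /* prints 6719 6719 6719 */
    return 0;
}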



Using Google Calculator to Convert Between Different Notations
We can use Google's built-in calculator to convert values between different notations:

Example:
Convert 0x6787a to decimal. Type the following words in Chrome's URL address input field:

0x6787a in decimal

Result will be 424058

Other Examples:

0x6787a in binary
0b11001010 in hex
816923 in hex
98127638 in binary

Calculate via Google calculator:

0x67a + 0x770
0x14203380 + 0x8*0x10

This only works in Google Search; Bing does not support it.


I create tables to suit my preferences. The tables are free to use, but please keep the author's name and the source URL: https://opencheattables.com.
Tables will not always be up to date. Feel free to modify them, but kindly provide credit to the source.

