What is Unicode?

Unicode is the universal character encoding standard. It assigns a unique number, called a code point, to every character across languages and scripts: a 4- to 6-digit hexadecimal value such as U+0041. Because the standard is shared by all computing platforms, text can be represented and exchanged consistently across different systems, programs, and devices.

Key Features of Unicode

  • Universal Coverage: Unicode aims to encode all the characters humans use for writing, including letters, symbols, punctuation marks, emojis, mathematical symbols, and more.
  • Unique Code: Each character has a unique 4- to 6-digit hexadecimal number called a code point. For example, the letter ‘A’ has the code 0041, written as U+0041.
  • Compatible with ASCII:
    • Unicode is compatible with ASCII: the first 128 code points in Unicode correspond directly to the characters of the 7-bit ASCII table.
    • In other words, ASCII is a subset of Unicode.
    • But wait! For the character ‘A’, the ASCII value is 65 while the Unicode code point is U+0041. How is that backward compatible with ASCII?
    • Because U+0041 is written in hexadecimal, and hexadecimal 41 is 65 in decimal (see the short sketch after this list).
    • (41)₁₆ = (65)₁₀
  • Flexibility: Unicode is flexible. It allows new characters to be added, supporting evolving communication and language needs.
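
To make the hexadecimal/decimal point concrete, here is a minimal sketch in Python (Python is used here purely for illustration; any language with Unicode support behaves the same way):

    # The ASCII value of 'A' (decimal 65) and the code point U+0041 (hex 41) are the same number.
    print(ord("A"))        # 65   -> decimal value of the code point
    print(hex(ord("A")))   # 0x41 -> the same value in hexadecimal, i.e. U+0041
    print(chr(0x41))       # A    -> from the code point back to the character
    print(chr(0x1F60A))    # 😊   -> code points beyond ASCII work the same way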

History of Unicode

Before the development of Unicode, there were hundreds of different character encodings for assigning letters and other characters to numbers so that computers could process them. No single encoding could cover enough characters for all of the world’s languages, let alone all the letters, punctuation, and technical symbols in regular use. Encodings also conflicted with one another: two encodings could use the same number for two different characters, or different numbers for the same character. Any computer had to handle many encodings, which increased the risk of data corruption whenever text moved between different computers or encodings.

Size and Growth

As of today, Unicode defines over 149,000 characters, and the set continues to grow to accommodate new symbols, emojis, and scripts. Here are some characters with their code points:

Character    Unicode
😊           U+1F60A
?            U+1F44
1            U+0031
+            U+002B
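
As a rough sketch, Python's built-in unicodedata module can report a character's code point and official name (shown for illustration; the characters below are from the table above):

    import unicodedata

    for ch in ["😊", "1", "+"]:
        # Format the code point as at least four uppercase hex digits (U+XXXX notation)
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
    # U+1F60A  SMILING FACE WITH SMILING EYES
    # U+0031  DIGIT ONE
    # U+002B  PLUS SIGN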

How To Type in Unicode Characters?

  • Log in to your computer’s operating system.
  • Open the Unicode/emoji picker:
    • On a Windows machine, press the Windows key + period (.).
    • On macOS, press Control + Command + Space.
  • This opens a small window with Unicode characters.
  • Search for the character you want and click it. The character will appear on the screen.
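
If you are writing code rather than typing into a GUI, most programming languages also let you enter characters by code point. A minimal Python sketch (again an illustrative choice of language):

    print("\u0041")         # A  -> four-digit \u escape for code points up to U+FFFF
    print("\U0001F60A")     # 😊 -> eight-digit \U escape for code points above U+FFFF
    print("\N{PLUS SIGN}")  # +  -> escape by the character's official Unicode name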

Unicode Transformation Format (UTF)

Unicode Transformation Format (UTF) is a method of encoding Unicode characters for storage and communication. A UTF specifies how Unicode code points are converted into a sequence of bytes. The most common forms are UTF-8, UTF-16, and UTF-32.
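
As a rough sketch of what "converting characters into bytes" means in practice (Python used for illustration):

    text = "Unicode ✓"
    data = text.encode("utf-8")    # str -> bytes, e.g. for storage or network transmission
    print(data)                    # b'Unicode \xe2\x9c\x93'
    print(data.decode("utf-8"))    # bytes -> str when the data is read back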

UTF-8

  • UTF-8 is a variable-width encoding in which each character is encoded using 1 to 4 bytes.
  • UTF-8 is backward compatible with ASCII: all the ASCII characters (code points 0–127, i.e. (00–7F)₁₆) are represented using a single byte.
  • All other Unicode characters are represented in UTF-8 using multiple bytes.
  • UTF-8 is widely used on the internet and in UNIX-like operating systems.
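
A small sketch of UTF-8's variable width (Python used for illustration):

    print("A".encode("utf-8"))   # b'A'                  -> ASCII letter: 1 byte
    print("é".encode("utf-8"))   # b'\xc3\xa9'           -> accented Latin letter: 2 bytes
    print("😊".encode("utf-8"))  # b'\xf0\x9f\x98\x8a'   -> emoji: 4 bytes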

UTF-16

  • UTF-16 is also a variable-width encoding, in which each character is encoded using either 2 or 4 bytes.
  • UTF-16 is used in the Microsoft Windows OS and in programming languages such as Java.
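
A corresponding sketch for UTF-16 ("utf-16-le" is used here so Python does not prepend a byte-order mark):

    print(len("A".encode("utf-16-le")))   # 2 -> most characters take 2 bytes
    print(len("😊".encode("utf-16-le")))  # 4 -> code points above U+FFFF take 4 bytes (a surrogate pair)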

UTF-32

  • UTF-32 is a fixed-width encoding in which every character is encoded using exactly 4 bytes.
  • This gives a simple one-to-one correspondence between code points and encoded units, but it is less space-efficient: a character such as ‘A’, which needs only one byte in UTF-8 (41), takes four bytes in UTF-32 (00000041).
  • UTF-32 is less commonly used in mainstream applications and systems because of this space inefficiency and for compatibility reasons.
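
And a sketch of UTF-32's fixed width (same illustrative Python setup):

    print("A".encode("utf-32-le"))         # b'A\x00\x00\x00' -> even ASCII takes 4 bytes
    print(len("ABC".encode("utf-32-le")))  # 12 -> 3 characters x 4 bytes each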

Conclusion

In conclusion, the Unicode Standard assigns a unique number to each character, irrespective of platform, device, application, or language. It has been adopted by virtually all modern software vendors, allowing data to be transmitted across many different platforms, devices, and applications without data loss.

Frequently Asked Questions on Unicode – FAQs

Are ASCII and Unicode the same?

No, ASCII and Unicode are not the same. In fact, ASCII is a subset of Unicode: the first 128 Unicode code points match the ASCII character set.

Can new characters be added to Unicode?

Yes, new characters can be added to Unicode. Code points range from U+0000 to U+10FFFF, so there are still plenty of unassigned codes left for future symbols and emojis.

Does Unicode support emojis?

Yes, Unicode supports emojis. Each emoji has a unique code point, just like every other Unicode character.

