# 1-bit and 2-bit Character Decoder

In this lab, you will build a 1-bit and 2-bit character decoder. The goal is to determine whether the last character in a given binary array is a one-bit character. You will implement a function `isOneBitCharacter` that takes a binary array `bits` as input and returns `true` if the last character is a one-bit character, and `false` otherwise.

We have two special characters:

- The first character can be represented by one bit (`0`).
- The second character can be represented by two bits (`10` or `11`).

The input binary array `bits` always ends with `0`.

Consider the following examples:

**Example 1:**

**Input:** bits = `[1, 0, 0]`

**Output:** true
**Explanation:** The only way to decode it is as a two-bit character (`10`) followed by a one-bit character (`0`). So the last character is a one-bit character.

**Example 2:**

**Input:** bits = `[1, 1, 1, 0]`

**Output:** false
**Explanation:** The only way to decode it is as two two-bit characters (`11` then `10`). So the last character is not a one-bit character.

Your function should satisfy the following constraints:

- `1 <= bits.length <= 1000`
- `bits[i]` is either `0` or `1`.

## Challenges

- Write the `isOneBitCharacter` function.
- Export the `isOneBitCharacter` function correctly using ESM import/export.
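One way to approach the decoding rules above is a greedy left-to-right scan: a `1` always begins a two-bit character, so it consumes two positions, while a `0` consumes one. The last character is one-bit exactly when the scan lands on the final index. The sketch below assumes this greedy approach and uses an ESM `export`; treat it as a starting point, not the required solution.

```javascript
// Greedy scan: a 1 starts a two-bit character (10 or 11), a 0 stands alone.
export function isOneBitCharacter(bits) {
  let i = 0;
  // Walk through every position before the final bit.
  while (i < bits.length - 1) {
    // A 1 consumes two bits; a 0 consumes one.
    i += bits[i] === 1 ? 2 : 1;
  }
  // If the scan stops exactly on the last index, the final 0 is its own
  // one-bit character; if it jumped past it, the final 0 was the second
  // half of a two-bit character.
  return i === bits.length - 1;
}
```

With ESM, a consumer would import it as `import { isOneBitCharacter } from './decoder.js';` (the file name `decoder.js` is only an example).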