The Romans did not use numerals for calculations, so they had no need for a zero to hold a place or keep a column empty. The Roman numeral system was used for trade, and the Romans never needed a special symbol to represent zero.
Who invented zero, and when? The ancient Greeks were aware of the concept of zero (as in 'We have no marbles'), but didn't think of it as a number. Aristotle dismissed it because you couldn't divide by zero and get a down-to-earth result.
About 725, Bede or one of his colleagues used the letter N, the initial of nulla or nihil (Latin words for "nothing"), to stand for 0 in a table of epacts otherwise written entirely in Roman numerals.
Zero's origins most likely date back to the “fertile crescent” of ancient Mesopotamia. Sumerian scribes used spaces to denote absences in number columns as early as 4,000 years ago, but the first recorded use of a zero-like symbol dates to sometime around the third century B.C. in ancient Babylon.
The Italian mathematician Fibonacci (c. 1170 – c. 1250), who grew up in North Africa and is credited with introducing the decimal system to Europe, used the term zephyrum. This became zefiro in Italian, and was then contracted to zero in Venetian.
There is strong evidence that zero is an Eastern development that came to the West from India or a civilization with roots in India, such as Cambodia. This would mean that zero is not a Greek or Western invention, as scholars had long thought.
"The one that we got the zero from came from the Fertile Crescent," Seife says. It first came to be between 400 and 300 B.C. in Babylon, before developing further in India, wending its way through northern Africa and, in Fibonacci's hands, crossing into Europe via Italy.
A mathematician and astronomer, Aryabhata contributed multiple concepts crucial to mathematics as we know it today, including an approximation of the value of pi (about 3.14) and a formula for right-angled triangles. The prior absence of zero created difficulty in carrying out even simple calculations.
The first recorded use of the word zero in the English language was in 1598. However, the concept is ancient, perhaps first captured by the Sanskrit word śūnya. In ancient Egypt, the word for zero was nefer, a word whose hieroglyphic symbol is a heart with a trachea.
The English words "naught" and "nought" are compounds of no- ("no") and wiht ("thing"). The words "aught" and "ought" (the latter in its noun sense) similarly come from Old English "āwiht" and "ōwiht", which are compounds of ā ("ever") and wiht. Their meanings are the opposites of "naught" and "nought": they mean "anything" or "all".
The first recorded zero appeared in Mesopotamia around the third century B.C. The Mayans invented it independently circa the fourth century A.D. It was later devised in India in the mid-fifth century, spread to Cambodia near the end of the seventh century, and into China and the Islamic countries at the end of the eighth.
The Roman number system was designed mainly for pricing goods and conducting trade, so it did not need a value to represent zero. Instead of a symbol for zero, the Romans used the word nulla.
In 1299, zero was banned in Florence, along with all Arabic numerals, because they were said to encourage fraud.
The Romans marked their years from the founding of the city (ab urbe condita). Thus, what would be "year 0" to the Romans corresponds to 753 BC, and the year 2019 is 2772 in the Roman reckoning.
The first place-value system was developed by the Babylonians. They used two cuneiform symbols for counting: a vertical line to represent one unit, and a chevron to represent ten units.
In around 500 AD, Aryabhata devised a number system that had no zero yet was a positional system. He used the word "kha" for position, and it would later be used as a name for zero.
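To see why a positional system eventually demands a placeholder, here is a small illustrative sketch in Babylonian-style base 60 (the digits are invented for illustration, not taken from any actual tablet):

% Base-60 place value: the same two nonzero digits name very different
% numbers depending on whether an empty column sits between them.
\[
(2,\,5)_{60} = 2\cdot 60 + 5 = 125,
\qquad
(2,\,0,\,5)_{60} = 2\cdot 60^{2} + 0\cdot 60 + 5 = 7205.
\]

Without a mark for the empty middle column, both numbers would be written with the same two symbols, and only context could tell 125 and 7205 apart.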
In mathematics, the factorial of a number n is the product of all the positive integers less than or equal to n; equivalently, n! counts the number of ways the elements of an n-element set can be arranged. There is exactly one way to arrange zero objects (the empty arrangement), so 0! = 1.
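A short consistency check (not a formal proof) points the same way: the defining recurrence for factorials should still hold at n = 1, which forces the value of 0!.

% Evaluating the recurrence n! = n * (n-1)! at n = 1 forces 0! = 1.
\[
n! = n\cdot(n-1)! \quad\Longrightarrow\quad 1! = 1\cdot 0! \quad\Longrightarrow\quad 0! = 1.
\]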
The Modern Form of ZERO
In the ninth century, the Persian mathematician Mohammed ibn-Musa al-Khowarizmi worked on equations that equaled zero.
Zu Chongzhi, a Chinese mathematician and astronomer of the 5th century, made the remarkable achievement of determining the value of pi to seven decimal places, placing it between 3.1415926 and 3.1415927. His calculation remained the world's most accurate for nearly 1,000 years, until the 14th century.
But in 1768, the Swiss mathematician Johann Lambert revealed the remarkable fact that no fraction can pin down the precise value of pi: it is irrational, and its decimal expansion goes on forever without repeating.
Albert Einstein did not invent pi. Pi describes the ratio of the circumference of a circle to its diameter and was discovered in ancient times.
In the ninth century, the mathematician Mohammed ibn-Musa al-Khowarizmi was among the first to work on equations that equal zero, work now known as algebra; he called zero 'sifr'. By then the zero was part of the Arabic numeral system, in a shape similar to the present-day oval we use.
The invention of zero immensely simplified computations, freeing mathematicians to develop vital mathematical disciplines such as algebra and calculus, and eventually providing the basis for computers.
Having no zero would unleash utter chaos in the world. Maths would be a different ball game altogether, with no fractions, no algebra and no calculus. The number line would jump from -1 to 1 with nothing bridging the gap. And zero has enormous value as a placeholder: without it, a billion (1,000,000,000) would be written simply as “1”.
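A quick place-value check of that last claim, written out as powers of ten:

% Each zero in a billion marks an empty power of ten; strip the zeros
% and only the leading 1 survives.
\[
1{,}000{,}000{,}000 = 1\cdot 10^{9} + 0\cdot 10^{8} + \cdots + 0\cdot 10^{1} + 0\cdot 10^{0}.
\]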