Friday, May 27, 2016

The Year 2038 Problem - The Unix Millennium Bug





Animation showing how the date, represented as a signed 32-bit integer, would reset at 03:14:08 UTC on 19 January 2038.

The Year 2038 problem is an issue for computing and data storage situations in which time values are stored or calculated as a signed 32-bit integer, and this number is interpreted as the number of seconds since 00:00:00 UTC on 1 January 1970 ("the epoch"). Such implementations cannot encode times after 03:14:07 UTC on 19 January 2038, a problem similar to but not entirely analogous to the "Y2K problem" (also known as the "Millennium Bug"), in which 2-digit values representing the number of years since 1900 could not encode the year 2000 or later. Most 32-bit Unix-like systems store and manipulate time in this "Unix time" format, so the year 2038 problem is sometimes referred to as the "Unix Millennium Bug" by association.

The year 2000 marked a significant date in the computer industry. Those days raised the heat of Y2K - the Year 2000, or Millennium, bug - across all computing systems. If you remember those days, it was expected to have a huge negative impact if it was not corrected in time. The bug was fixed properly, which saved the computer industry from a great failure.
A similar problem may occur on all Unix-based systems on 19 January 2038 if they are not upgraded from 32-bit to 64-bit time handling. Here is a note about what happened in 2000 (Y2K) and what will happen in the year 2038.

What is Y2K? What caused it?

Early computers were made to "think" in two digits for the year when counting dates: the leading "19" was treated as a constant and only the last two digits changed, as in 1998, 1999, and so on. This threatened to cause a major problem on some programmed computers, because rolling over from 1999 would wrap the two-digit year back to 00 and reset the date to 1900. The major nations took it seriously, however, and developers succeeded in fixing the problem, which greatly reduced the losses.

What is Year 2038 Problem?

Now it is Unix's turn for a millennium bug, also called the Unix Millennium Bug. It is caused by the fact that Unix time is stored as a signed 32-bit binary integer, which can only count seconds from 1970 up to early 2038. On 19 January 2038, one second after 03:14:07 UTC, the counter will overflow and Unix systems will interpret the date as 13 December 1901. This can be fixed by Unix vendors when they switch from 32-bit to 64-bit time representations. Though it is a long time ahead, it can already cause problems for programs that work with dates far in the future.

Here is how the date gets reset in the year 2038.

Note that the image shows the counter wrapping to 1901, not restarting at 1970: the wrapped value is a large negative number of seconds before the epoch, so the system jumps back to 13 December 1901. The UTC time region that Wikipedia mentions is not a mistake either. Unix time counts seconds in UTC, so the overflow happens at the same instant everywhere in the world; only the local wall-clock reading of that instant differs by time zone.

Simply put, past the year 2038 the computer software can no longer interpret the date properly and it reverts to December 13, 1901, which will cause the software to fail. It happens because the system time is stored as a signed 32-bit integer. The issue is often referred to as the Unix Millennium Bug.

According to Wikipedia the software will "interpret this number as the number of seconds since 00:00:00 UTC on Thursday, 1 January 1970. The furthest time that can be represented this way is 03:14:07 UTC on Tuesday, 19 January 2038. Times beyond this moment will 'wrap around' and be stored internally as a negative number, which these systems will interpret as a date on December 13, 1901 rather than January 19, 2038. This is caused by integer overflow. The counter 'runs out' of usable digits, 'increments' the sign bit instead, and reports a maximally negative number (continuing to count up, toward zero). This is likely to cause problems for users of these systems due to erroneous calculations."

Will it affect you?

Most new computers use a 64-bit system and should not have an issue with programs that run in 64-bit mode. However, older computers, 64-bit computers running programs in 32-bit mode, and embedded systems that use dates to compute or run diagnostics will very likely be affected by the Unix Millennium Bug (the 2038 bug). Transportation systems and cars have embedded systems. Basically, anything with a computer embedded in it that uses dates can suffer from this. Think of all of the objects around us that have a computer chip in them.

So why worry now? It is only 2016.

Because 2038 is only 22 years away, any 32-bit embedded system that works with dates 22 or more years in the future already hits the bug, so such systems must be fixed now, in 2016.

What do you do? What is the solution?

There is no universal solution for the Year 2038 Bug. Any change to the definition of the time_t data type would result in code compatibility problems in any application in which date and time representations are dependent on the nature of the signed 32-bit time_t integer. For example, changing time_t to an unsigned 32-bit integer, which would extend the range to the year 2106, would adversely affect programs that store, retrieve, or manipulate dates prior to 1970, as such dates are represented by negative numbers. Increasing the size of the time_t type to 64-bit in an existing system would cause incompatible changes to the layout of structures and the binary interface of functions.
The only real solution is to upgrade your equipment to 64-bit systems before the problem affects you. While your own personal equipment may not cause you any issues, the major concern is everything around you: transportation systems, streetlights, and so on.

Starting with NetBSD version 6.0 (released in October 2012), the NetBSD operating system uses a 64-bit time_t for both 32-bit and 64-bit architectures. Applications that were compiled for an older NetBSD release with 32-bit time_t are supported via a binary compatibility layer, but such older applications will still suffer from the Year 2038 problem.

OpenBSD since version 5.5, released in May 2014, also uses a 64-bit time_t for both 32-bit and 64-bit architectures. In contrast to NetBSD, there is no binary compatibility layer. Therefore, applications expecting a 32-bit time_t and applications using anything different from time_t to store time values may break.

Linux uses a 64-bit time_t for 64-bit architectures only; the pure 32-bit ABI is not changed due to backward compatibility. There is ongoing work, mostly for embedded Linux systems, to support 64-bit time_t on 32-bit architectures, too.

The x32 ABI for Linux (which defines an environment for programs with 32-bit addresses but running the processor in 64-bit mode) uses a 64-bit time_t. Since it was a new environment, there was no need for special compatibility precautions.

Alternative proposals have been made (some of which are in use), such as storing either milliseconds or microseconds since an epoch (typically either 1 January 1970 or 1 January 2000) in a signed 64-bit integer, providing a minimum range of about 292,000 years. Other proposals for new time representations provide different precisions, ranges, and sizes (almost always wider than 32 bits), as well as solving other related problems, such as the handling of leap seconds. In particular, TAI64 is an implementation of the Temps Atomique International standard, the current international real-time standard for defining a second and frame of reference.

Here You Can See:

Count-down for Y2K38 bug


