Every time you call Date.now() in JavaScript, query a created_at column in your database, or read an API response with a "timestamp" field, you are working with Unix time. It is one of the most fundamental concepts in software engineering — and also one of the most reliable sources of off-by-one errors, timezone bugs, and confusing production incidents. This guide covers everything you need to work with timestamps correctly.
What Is a Unix Timestamp?
A Unix timestamp (also called Unix time, POSIX time, or epoch time) is the number of seconds that have elapsed since the Unix epoch: January 1, 1970, 00:00:00 UTC. The epoch was chosen somewhat arbitrarily when Unix was being developed at Bell Labs in the early 1970s — it was simply a round number in the recent past.
As of early 2026, the Unix timestamp is approximately 1,775,000,000 seconds. You can verify this right now by opening a browser console and running Math.floor(Date.now() / 1000). The result is a single integer that unambiguously identifies this specific moment in time — not a date in a timezone, not a formatted string, just a count of seconds.
Why Unix Time Exists
Unix timestamps solve several problems that plague date-string representations:
- Timezone-free storage: A timestamp always represents a UTC moment. There is no ambiguity about whether "2025-11-03 01:30:00" is before or after a DST transition — the timestamp is just a number.
- Arithmetic simplicity: Calculating the duration between two events is subtraction. Adding 24 hours is adding 86400. Sorting events chronologically is sorting integers.
- Portability: Every operating system, every programming language, and every database supports Unix timestamps. The integer 1711900800 means the same thing everywhere.
- Compact storage: A 32-bit integer stores a Unix timestamp in 4 bytes. A 64-bit integer covers a range spanning billions of years.
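The arithmetic and sorting properties above take only a few lines to demonstrate; here is a small Python sketch (the event values are illustrative, not real data):

```python
# Duration between two events is plain integer subtraction
deploy_started = 1711872000   # hypothetical deploy start (Unix seconds)
deploy_finished = 1711875600  # one hour later
print(deploy_finished - deploy_started)  # 3600 seconds

# "Add 24 hours" is adding 86400; chronological sort is integer sort
events = [1711875600, 1711872000, 1711879200]
print(sorted(events))  # [1711872000, 1711875600, 1711879200]
```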
Seconds vs Milliseconds: The JavaScript Gotcha
This is the single most common source of timestamp bugs in JavaScript codebases. The Unix timestamp convention uses seconds. Almost every Unix tool, database, and API uses seconds. But JavaScript's Date.now() and the Date constructor work in milliseconds.
Date.now()                     // 1711872000000 — milliseconds since epoch
Math.floor(Date.now() / 1000)  // 1711872000 — seconds since epoch
new Date(1711872000)           // Tue Jan 20 1970 ... — WRONG: interpreted as ms
new Date(1711872000 * 1000)    // Sun Mar 31 2024 ... — correct
// Gotcha: passing a seconds timestamp to new Date() gives a date in the 1970s
// because JS interprets it as milliseconds
When you receive a timestamp from an API, always check the documentation to confirm whether it is in seconds or milliseconds. A quick sanity check: a seconds timestamp for a date in 2024 is approximately 1.7 billion (10 digits). A milliseconds timestamp is approximately 1.7 trillion (13 digits). If you see a 13-digit number, it's milliseconds.
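That digit-count sanity check can be turned into a small normalization helper. This is an illustrative sketch, not a standard API — the function name and threshold are ours:

```python
def to_seconds(ts: int) -> int:
    """Normalize a timestamp that may be in seconds or milliseconds."""
    # A seconds value above 100 billion would mean a date past the year 5000,
    # so anything that large is almost certainly milliseconds.
    if ts > 100_000_000_000:
        return ts // 1000
    return ts

print(to_seconds(1711872000))     # 10 digits: already seconds → 1711872000
print(to_seconds(1711872000000))  # 13 digits: milliseconds → 1711872000
```

A heuristic like this is a last resort for inconsistent inputs; when the API documents its unit, trust the documentation instead.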
The Year 2038 Problem
Unix timestamps are traditionally stored as a 32-bit signed integer, which has a maximum value of 2,147,483,647. That value corresponds to January 19, 2038 at 03:14:07 UTC. When a system using a 32-bit signed timestamp counter reaches this value and increments by one second, the value overflows to −2,147,483,648, which represents December 13, 1901. This is the Year 2038 problem (Y2K38).
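The boundary values are easy to verify with ordinary integer arithmetic. A Python sketch (Python integers are unbounded, so the 32-bit two's-complement wraparound is simulated explicitly; negative timestamps may not convert on Windows):

```python
import datetime

INT32_MAX = 2**31 - 1  # 2,147,483,647

# The last moment a 32-bit signed time_t can represent
last = datetime.datetime.fromtimestamp(INT32_MAX, tz=datetime.timezone.utc)
print(last)  # 2038-01-19 03:14:07+00:00

# One more second wraps to INT32_MIN in 32-bit two's complement
wrapped = (INT32_MAX + 1) - 2**32  # -2,147,483,648
rollover = datetime.datetime.fromtimestamp(wrapped, tz=datetime.timezone.utc)
print(rollover)  # 1901-12-13 20:45:52+00:00
```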
The good news: modern systems have largely addressed this. The Linux kernel completed 64-bit time_t support for its 32-bit architectures in version 5.6 (2020), and glibc followed with opt-in 64-bit time support in 2.34 (2021). Most 64-bit operating systems have used 64-bit time since their inception — a 64-bit signed timestamp can represent dates up to approximately the year 292 billion. Database systems like PostgreSQL use 64-bit timestamps internally.
The remaining risk is in legacy embedded systems, older 32-bit Linux installations, and software that explicitly stores timestamps in 32-bit integer columns in databases. If you are designing a new system today, always use 64-bit storage for timestamps. If you are maintaining legacy code, audit your timestamp storage types now — 2038 is only 12 years away.
Converting Timestamps in JavaScript
// Current timestamp in seconds
const nowSeconds = Math.floor(Date.now() / 1000);
// Seconds timestamp → Date object (using a fixed example value)
const date = new Date(1711872000 * 1000);
// Date → formatted string in user's local timezone
console.log(date.toLocaleString());
// "3/31/2024, 8:00:00 AM" (varies by locale and local timezone)
// With explicit timezone
console.log(date.toLocaleString("en-IN", { timeZone: "Asia/Kolkata" }));
// "31/3/2024, 1:30:00 pm"
// ISO 8601 string (always UTC)
console.log(date.toISOString());
// "2024-03-31T08:00:00.000Z"
// Extract components
const year = date.getUTCFullYear();
const month = date.getUTCMonth() + 1; // 0-indexed — add 1
const day = date.getUTCDate();

For anything beyond basic formatting in JavaScript, use a library like date-fns or the new built-in Temporal API. The built-in Date API is notoriously awkward: months are 0-indexed, there is no built-in duration arithmetic, and timezone handling beyond toLocaleString requires manual offset calculations.
Converting Timestamps in Python
import datetime
import zoneinfo
# Current timestamp
now = datetime.datetime.now(datetime.timezone.utc)
timestamp = int(now.timestamp())
# Seconds timestamp → datetime (UTC)
dt_utc = datetime.datetime.fromtimestamp(1711872000, tz=datetime.timezone.utc)
print(dt_utc) # 2024-03-31 08:00:00+00:00
# Convert to a specific timezone (Python 3.9+ zoneinfo)
ist = zoneinfo.ZoneInfo("Asia/Kolkata")
dt_ist = dt_utc.astimezone(ist)
print(dt_ist) # 2024-03-31 13:30:00+05:30
# WARNING: datetime.fromtimestamp() without tz uses LOCAL timezone
# This is almost always the wrong thing in server code
dt_local = datetime.datetime.fromtimestamp(1711872000) # Avoid this

Always pass an explicit tz argument to datetime.fromtimestamp(). Without it, Python uses the local system timezone, which differs between development machines and production servers. This is one of the most common sources of "it works on my machine" timestamp bugs.
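The pitfall can be demonstrated by forcing the process timezone (time.tzset() is POSIX-only and does not exist on Windows, so this sketch assumes a Unix-like system):

```python
import datetime
import os
import time

os.environ["TZ"] = "Asia/Kolkata"
time.tzset()  # apply the TZ change to this process (POSIX only)

# Naive: interpreted in the process's local timezone, with no tzinfo attached
naive = datetime.datetime.fromtimestamp(1711872000)
print(naive, naive.tzinfo)  # 2024-03-31 13:30:00 None

# Aware: explicit UTC, same result on every machine
aware = datetime.datetime.fromtimestamp(1711872000, tz=datetime.timezone.utc)
print(aware)  # 2024-03-31 08:00:00+00:00
```

The naive result looks plausible in isolation, which is exactly why the bug survives code review: it only misbehaves when the code runs on a machine with a different TZ setting.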
Converting Timestamps in SQL
-- PostgreSQL
-- Seconds timestamp to timestamptz
SELECT to_timestamp(1711872000); -- 2024-03-31 08:00:00+00
-- timestamptz to seconds
SELECT EXTRACT(EPOCH FROM NOW());
-- Convert to a specific timezone
SELECT to_timestamp(1711872000) AT TIME ZONE 'Asia/Kolkata'; -- 2024-03-31 13:30:00

-- MySQL
-- FROM_UNIXTIME converts seconds to datetime in session timezone
SELECT FROM_UNIXTIME(1711872000);
-- UNIX_TIMESTAMP converts datetime to seconds (session timezone)
SELECT UNIX_TIMESTAMP(NOW());

-- SQLite — no native timestamp type; store as INTEGER
SELECT datetime(1711872000, 'unixepoch');
SELECT datetime(1711872000, 'unixepoch', 'localtime'); -- local tz
Timezone Handling: The Core Mental Model
The golden rule of timestamps: store UTC, display local. A Unix timestamp is always and only UTC. It is a count of seconds since a UTC epoch. There is no timezone baked into the number. Timezone conversion is a display concern that happens at the last moment before showing a value to a user.
This means:
- Database columns storing timestamps should always be TIMESTAMPTZ (timestamp with time zone) in PostgreSQL, which stores UTC internally regardless of how you insert it. In MySQL, use TIMESTAMP (which converts to UTC on store) rather than DATETIME (which stores whatever you pass in, with no timezone awareness).
- API responses should return timestamps in UTC (either as a Unix timestamp integer or as an ISO 8601 string ending in Z). Let the client convert to the user's local timezone.
- Server-side code should never reference the local system timezone when working with timestamps. Use UTC-aware datetime objects throughout, convert to local only at the presentation layer.
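A minimal Python sketch of that separation, with storage and presentation kept apart (the function names are illustrative):

```python
import datetime
import zoneinfo

# Storage layer: persist only an integer UTC timestamp
def record_event() -> int:
    return int(datetime.datetime.now(datetime.timezone.utc).timestamp())

# Presentation layer: convert per-user, at the last moment
def render(ts: int, user_tz: str) -> str:
    dt = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
    return dt.astimezone(zoneinfo.ZoneInfo(user_tz)).strftime("%Y-%m-%d %H:%M %Z")

ts = 1711872000  # one stored value, many renderings
print(render(ts, "Asia/Kolkata"))      # 2024-03-31 13:30 IST
print(render(ts, "America/New_York"))  # 2024-03-31 04:00 EDT
```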
Common Timestamp Bugs
Storing Timestamps in Local Timezone
Storing timestamps as local timezone datetime strings is a data corruption waiting to happen. When your server's timezone changes (a cloud instance migration, a DST transition, a deployment to a different region), all previously stored timestamps become misinterpreted. A value that was intended to mean 2024-03-31 08:00 UTC suddenly means 2024-03-31 08:00 IST — off by 5.5 hours. Always store UTC.
The DST Trap with Date Arithmetic
Daylight saving time makes "add one day" ambiguous. One day is sometimes 23 hours, sometimes 25 hours. If you calculate tomorrow by adding 86,400 seconds to a UTC timestamp, you get the correct UTC moment regardless of DST. If you manipulate date strings by incrementing the day field in a local timezone that observes DST, you may arrive at a different UTC moment than you intended. Do arithmetic on UTC timestamps; format for display afterward.
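The trap is concrete around a real transition. A Python sketch using the 2024 US spring-forward date (March 10), when New York's local day was only 23 hours long:

```python
import datetime
import zoneinfo

ny = zoneinfo.ZoneInfo("America/New_York")

# Noon in New York the day before the 2024 spring-forward transition
noon_before = datetime.datetime(2024, 3, 9, 12, 0, tzinfo=ny)
ts = int(noon_before.timestamp())

# Adding exactly 86,400 seconds crosses the DST gap
next_utc = datetime.datetime.fromtimestamp(ts + 86400, tz=datetime.timezone.utc)
print(next_utc.astimezone(ny))  # 2024-03-10 13:00:00-04:00 — 1 PM local, not noon
```

Whether you want "the same UTC instant 24 hours later" (timestamp arithmetic) or "noon on the next local calendar day" (calendar arithmetic) depends on the feature; the bug is using one when you mean the other.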
Off-by-One in Date Ranges
"Give me all records from today" is surprisingly ambiguous. Does "today" mean midnight UTC to midnight UTC? Or midnight in the user's timezone? A user in India at 11 PM local time is still in the same calendar day locally, but it's already tomorrow in UTC. Define your date range boundaries explicitly in UTC, and be precise about whether ranges are inclusive or exclusive on each end.
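One way to make the boundaries explicit is to compute the user's local calendar day and express it as a half-open UTC timestamp range. A sketch (the function name is ours):

```python
import datetime
import zoneinfo

def local_day_bounds(ts: int, tz_name: str) -> tuple[int, int]:
    """Return [start, end) Unix timestamps for the local calendar day containing ts."""
    tz = zoneinfo.ZoneInfo(tz_name)
    local = datetime.datetime.fromtimestamp(ts, tz=tz)
    start = local.replace(hour=0, minute=0, second=0, microsecond=0)
    end = start + datetime.timedelta(days=1)
    return int(start.timestamp()), int(end.timestamp())

# 2024-03-31 08:00 UTC is 13:30 in Kolkata, so the local "today" began
# at 2024-03-31 00:00 IST, which is 18:30 UTC the previous evening
start, end = local_day_bounds(1711872000, "Asia/Kolkata")
print(start, end)  # 1711823400 1711909800
```

The half-open convention (start inclusive, end exclusive) avoids the classic gap where a record at exactly 23:59:59.5 falls into no day at all.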
Timestamp Arithmetic
// JavaScript — calculating date differences
const start = Math.floor(new Date("2024-01-01T00:00:00Z").getTime() / 1000);
const end = Math.floor(new Date("2024-03-31T00:00:00Z").getTime() / 1000);
const diffSeconds = end - start;
const diffDays = diffSeconds / 86400; // 90 days
// Adding intervals (always in seconds)
const oneHour = 3600;
const oneDay = 86400;
const oneWeek = 604800;
const thirtyDays = 30 * 86400;
const expiresAt = Math.floor(Date.now() / 1000) + thirtyDays;
// Start of day in UTC (midnight UTC)
const now = Math.floor(Date.now() / 1000);
const startOfDayUTC = now - (now % 86400);
// End of day in UTC (23:59:59 UTC)
const endOfDayUTC = startOfDayUTC + 86400 - 1;

ISO 8601 vs Unix Timestamps in APIs
When designing APIs, you have two main choices for timestamp fields: Unix integer timestamps or ISO 8601 strings. Each has tradeoffs:
- Unix integers are compact, unambiguous about timezone (always UTC), trivial to compare and sort, and require no parsing. The downside is that they are not human-readable — you cannot tell at a glance when 1711900800 is.
- ISO 8601 strings (e.g., 2024-03-31T08:00:00Z) are human-readable and self-describing. The tradeoff: they require parsing, are larger (20+ characters vs 10 digits), and can encode timezone offsets in ways that introduce ambiguity if the API is inconsistent about using Z vs +00:00 vs local offsets.
The practical recommendation: use ISO 8601 strings with a trailing Z (UTC) in JSON API responses for human readability. Use Unix integer timestamps in high-throughput internal systems, log formats, and database indexes where parse overhead and storage size matter.
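Converting between the two representations is a one-liner in each direction. A Python sketch (the .replace() step keeps compatibility with Python versions older than 3.11, whose fromisoformat rejects a trailing Z):

```python
import datetime

# ISO 8601 Z string → Unix seconds
iso = "2024-03-31T08:00:00Z"
dt = datetime.datetime.fromisoformat(iso.replace("Z", "+00:00"))
print(int(dt.timestamp()))  # 1711872000

# Unix seconds → ISO 8601 Z string
dt2 = datetime.datetime.fromtimestamp(1711872000, tz=datetime.timezone.utc)
print(dt2.strftime("%Y-%m-%dT%H:%M:%SZ"))  # 2024-03-31T08:00:00Z
```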
Convert between Unix timestamps and human-readable dates instantly with Tanvrit's timestamp converter — works in any timezone, right in your browser. Open the Timestamp Converter →