The Complete Guide to Unix Timestamps
If you've spent more than five minutes reading server logs, debugging distributed systems, or storing dates in a database, you've encountered Unix timestamps. They are one of computing's most fundamental and universally agreed-upon conventions, and yet they trip up developers constantly.
This guide covers everything you need to know: what Unix timestamps are, how they work under the hood, how to use them in every major programming language, and the subtle gotchas that cause production bugs at 2 AM.
What Is a Unix Timestamp?
A Unix timestamp (also called Unix time, POSIX time, or epoch time) is the number of seconds that have elapsed since January 1, 1970, at 00:00:00 UTC. That specific moment is known as the Unix epoch.
For example:
- `0` = January 1, 1970 00:00:00 UTC
- `1000000000` = September 9, 2001 01:46:40 UTC
- `1700000000` = November 14, 2023 22:13:20 UTC
The idea is simple: instead of representing dates as a complex structure of years, months, days, hours, minutes, and seconds, just count the seconds from a fixed reference point. One number. No ambiguity about timezones, calendar systems, or formatting conventions.
Try converting timestamps right now with our Epoch Converter.
History and Origin
The Unix epoch was established in the early 1970s as part of the Unix operating system developed at Bell Labs. The original Unix time was measured in 60ths of a second, but this was changed to whole seconds by the time of Unix Version 6 in 1975.
The choice of January 1, 1970 was somewhat arbitrary. The original epoch was January 1, 1971, but was moved back to 1970 for a rounder number. The engineers wanted a date recent enough not to waste bits representing large values, but far enough back to cover practical use cases.
The decision to use seconds since a fixed epoch has proven remarkably durable. Over 50 years later, virtually every operating system, programming language, and database engine supports Unix timestamps as a first-class concept.
How Unix Timestamps Work
At its core, a Unix timestamp is just an integer. Here's the mapping:
- Epoch (time zero): January 1, 1970, 00:00:00 UTC
- Counting direction: Forward from epoch = positive integers. Backward = negative integers.
- Unit: Seconds (for the classic format)
- Timezone: Always UTC. A timestamp represents a single, unambiguous moment in time.
The calculation is straightforward. If the current time is March 15, 2025 at 14:30:00 UTC, the timestamp is the total number of seconds between that moment and January 1, 1970 00:00:00 UTC.
This means:
- `86400` = exactly one day after epoch (60 x 60 x 24 = 86,400 seconds)
- `604800` = exactly one week after epoch
- `31536000` = approximately one year after epoch (365 days)
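The arithmetic above is easy to verify with Python's standard library. A quick sketch (the `EPOCH` constant and variable names are just for illustration):

```python
from datetime import datetime, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# One day after the epoch is exactly 86,400 seconds
one_day = datetime(1970, 1, 2, tzinfo=timezone.utc)
assert (one_day - EPOCH).total_seconds() == 86400

# The timestamp for March 15, 2025 at 14:30:00 UTC is just the
# count of seconds between that moment and the epoch
moment = datetime(2025, 3, 15, 14, 30, 0, tzinfo=timezone.utc)
ts = int((moment - EPOCH).total_seconds())
print(ts)  # 1742049000
assert ts == int(moment.timestamp())  # matches the built-in conversion
```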
Timestamp Formats: Seconds, Milliseconds, Microseconds, Nanoseconds
While the classic Unix timestamp counts seconds, modern systems often use finer granularity. You can usually tell the format by the number of digits:
| Format | Digits | Example | Used By |
|---|---|---|---|
| Seconds | 10 | 1700000000 | Unix, PHP, Python, Ruby, most backends |
| Milliseconds | 13 | 1700000000000 | JavaScript, Java, Elasticsearch, AWS |
| Microseconds | 16 | 1700000000000000 | PostgreSQL internal storage, some C libraries |
| Nanoseconds | 19 | 1700000000000000000 | Go time.UnixNano(), InfluxDB, high-precision logging |
This is one of the most common sources of timestamp bugs. If you see a timestamp and it looks "too big," count the digits. A 13-digit number is almost certainly milliseconds, not seconds.
Quick conversions:
- Seconds to milliseconds: multiply by 1,000
- Seconds to microseconds: multiply by 1,000,000
- Seconds to nanoseconds: multiply by 1,000,000,000
Our timestamp converter auto-detects the format for you.
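The digit-count heuristic is easy to automate. Here `normalize_to_seconds` is a hypothetical helper, not a standard API; it just applies the heuristic from the table above:

```python
def normalize_to_seconds(ts: int) -> int:
    """Guess the unit of a Unix timestamp from its magnitude and
    return the equivalent value in whole seconds.

    Heuristic only: assumes timestamps between roughly 2001 and 2286,
    where seconds have 10 digits, ms 13, us 16, and ns 19.
    """
    digits = len(str(abs(ts)))
    if digits <= 10:
        return ts                      # already seconds
    elif digits <= 13:
        return ts // 1_000             # milliseconds
    elif digits <= 16:
        return ts // 1_000_000         # microseconds
    else:
        return ts // 1_000_000_000     # nanoseconds

print(normalize_to_seconds(1700000000000))        # 1700000000 (was ms)
print(normalize_to_seconds(1700000000000000000))  # 1700000000 (was ns)
```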
How to Get and Convert Timestamps in Every Major Language
Here are practical, copy-paste-ready code examples for working with Unix timestamps in 10 languages.
JavaScript / TypeScript
// Current timestamp in seconds
const nowSec = Math.floor(Date.now() / 1000);
// Current timestamp in milliseconds (JS native)
const nowMs = Date.now();
// Date object to timestamp
const date = new Date('2025-03-15T14:30:00Z');
const ts = Math.floor(date.getTime() / 1000);
// Timestamp to Date object
const fromTs = new Date(1700000000 * 1000); // seconds to ms
// Timestamp to ISO string
new Date(1700000000 * 1000).toISOString();
// "2023-11-14T22:13:20.000Z"
Watch out: JavaScript's Date.now() and date.getTime() return milliseconds, not seconds. This is the single most common timestamp bug in web development.
Python
import time
from datetime import datetime, timezone
# Current timestamp in seconds (float)
now = time.time()
# Current timestamp in seconds (integer)
now_int = int(time.time())
# datetime to timestamp
dt = datetime(2025, 3, 15, 14, 30, 0, tzinfo=timezone.utc)
ts = int(dt.timestamp())
# Timestamp to datetime (UTC)
from_ts = datetime.fromtimestamp(1700000000, tz=timezone.utc)
# Timestamp to ISO string
from_ts.isoformat()
# "2023-11-14T22:13:20+00:00"
Watch out: datetime.fromtimestamp() without a tz argument returns a naive datetime in the system's local timezone. Always pass tz=timezone.utc for unambiguous results.
Java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.LocalDateTime;
// Current timestamp in seconds
long nowSec = Instant.now().getEpochSecond();
// Current timestamp in milliseconds
long nowMs = System.currentTimeMillis();
// Timestamp to Instant
Instant instant = Instant.ofEpochSecond(1700000000L);
// Instant to LocalDateTime
LocalDateTime ldt = LocalDateTime.ofInstant(instant, ZoneOffset.UTC);
// LocalDateTime to timestamp
long ts = ldt.toEpochSecond(ZoneOffset.UTC);
Watch out: System.currentTimeMillis() returns milliseconds. Instant.getEpochSecond() returns seconds. Mixing them up will put your dates in the year 55,000.
Go
package main
import (
"fmt"
"time"
)
func main() {
// Current timestamp in seconds
nowSec := time.Now().Unix()
// Current timestamp in nanoseconds
nowNano := time.Now().UnixNano()
// Current timestamp in milliseconds
nowMs := time.Now().UnixMilli()
// Timestamp to time.Time
t := time.Unix(1700000000, 0)
// time.Time to formatted string
fmt.Println(t.UTC().Format(time.RFC3339))
// "2023-11-14T22:13:20Z"
}
Go's time package is one of the best-designed date/time APIs. It natively supports seconds, milliseconds, microseconds, and nanoseconds.
Ruby
# Current timestamp in seconds
now = Time.now.to_i
# Current timestamp as float (with fractions)
now_f = Time.now.to_f
# Timestamp to Time object
from_ts = Time.at(1700000000).utc
# Time to ISO 8601 string
from_ts.iso8601
# "2023-11-14T22:13:20Z"
# Parse a date string to timestamp
require 'time'
Time.parse("2025-03-15T14:30:00Z").to_i
PHP
// Current timestamp in seconds
$now = time();
// Current timestamp in milliseconds
$nowMs = (int)(microtime(true) * 1000);
// Timestamp to date string
echo date('Y-m-d H:i:s', 1700000000);
// "2023-11-14 22:13:20"
// Date string to timestamp
$ts = strtotime('2025-03-15 14:30:00 UTC');
// DateTime object
$dt = new DateTime('@1700000000');
$dt->setTimezone(new DateTimeZone('UTC'));
echo $dt->format(DateTime::ATOM);
// "2023-11-14T22:13:20+00:00"
Watch out: PHP's date() function uses the server's default timezone unless you explicitly set it. Use gmdate() for UTC, or call date_default_timezone_set('UTC') at startup.
C# / .NET
using System;
// Current timestamp in seconds
long nowSec = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
// Current timestamp in milliseconds
long nowMs = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
// Timestamp to DateTimeOffset
var dto = DateTimeOffset.FromUnixTimeSeconds(1700000000);
// DateTimeOffset to ISO string
Console.WriteLine(dto.ToString("o"));
// "2023-11-14T22:13:20.0000000+00:00"
Rust
use std::time::{SystemTime, UNIX_EPOCH};
fn main() {
// Current timestamp in seconds
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.expect("Time went backwards")
.as_secs();
// Current timestamp in milliseconds
let now_ms = SystemTime::now()
.duration_since(UNIX_EPOCH)
.expect("Time went backwards")
.as_millis();
// For more ergonomic date handling, use the `chrono` crate:
// use chrono::{DateTime, Utc, TimeZone};
// let dt = Utc.timestamp_opt(1700000000, 0).unwrap();
// println!("{}", dt.to_rfc3339());
}
Swift
import Foundation
// Current timestamp in seconds
let now = Int(Date().timeIntervalSince1970)
// Current timestamp in milliseconds
let nowMs = Int(Date().timeIntervalSince1970 * 1000)
// Timestamp to Date
let date = Date(timeIntervalSince1970: 1700000000)
// Date to ISO 8601 string
let formatter = ISO8601DateFormatter()
print(formatter.string(from: date))
// "2023-11-14T22:13:20Z"
Shell (Bash)
# Current timestamp
date +%s
# Timestamp to human-readable (GNU date)
date -d @1700000000
# Timestamp to human-readable (macOS/BSD date)
date -r 1700000000
# Human-readable to timestamp (GNU date)
date -d "2025-03-15 14:30:00 UTC" +%s
The Year 2038 Problem
The Y2038 problem is the Unix equivalent of Y2K, and it's real.
What Happens
On January 19, 2038, at 03:14:07 UTC, a 32-bit signed integer storing seconds since the Unix epoch will overflow. The maximum value of a signed 32-bit integer is 2,147,483,647. One second after that, the value wraps to -2,147,483,648, which represents a date in December 1901.
Maximum 32-bit timestamp: 2,147,483,647
Human readable: January 19, 2038, 03:14:07 UTC
One second later: -2,147,483,648
Interpreted as: December 13, 1901, 20:45:52 UTC
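The overflow is easy to reproduce. This Python sketch uses `struct` to reinterpret the bit pattern the way a signed 32-bit `time_t` would (note that `fromtimestamp` with negative values may fail on Windows):

```python
import struct
from datetime import datetime, timezone

MAX_32BIT = 2**31 - 1  # 2,147,483,647

print(datetime.fromtimestamp(MAX_32BIT, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# Add one second, then reinterpret the low 32 bits as a signed int,
# which is exactly what an overflowing 32-bit time_t does
wrapped = struct.unpack("<i", struct.pack("<I", (MAX_32BIT + 1) & 0xFFFFFFFF))[0]
print(wrapped)  # -2147483648

print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```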
Who Is Affected
- Embedded systems with 32-bit processors (IoT devices, industrial controllers, automotive systems)
- Legacy databases storing timestamps as 32-bit integers
- File formats that use 32-bit timestamp fields (some older tar, zip, and ext3 implementations)
- C programs compiled on 32-bit systems using `time_t` as a 32-bit value
Who Is Not Affected
- 64-bit systems are safe. A 64-bit timestamp can represent dates up to approximately 292 billion years in the future.
- JavaScript uses 64-bit floats for dates (safe until the year 275,760).
- Java uses 64-bit longs for `Instant.getEpochSecond()`.
- Python uses arbitrary-precision integers.
- Modern Linux kernels (5.6+) use 64-bit `time_t` even on 32-bit architectures.
What You Should Do
If you control the code:
- Use 64-bit integers for storing timestamps
- On 32-bit C/C++ systems, compile with `_TIME_BITS=64` (Linux glibc 2.34+)
- Avoid storing timestamps as 32-bit integers in databases; use `BIGINT` or native datetime types
If you're building new systems in 2024+, you're almost certainly fine. But if you maintain legacy systems, audit your timestamp storage now rather than in 2037.
Negative Timestamps: Dates Before 1970
Unix timestamps can be negative. A negative timestamp represents a moment before the Unix epoch.
- `-1` = December 31, 1969, 23:59:59 UTC
- `-86400` = December 31, 1969, 00:00:00 UTC
- `-2208988800` = January 1, 1900, 00:00:00 UTC
Most modern languages handle negative timestamps correctly:
// JavaScript
new Date(-86400 * 1000).toISOString()
// "1969-12-31T00:00:00.000Z"
# Python
from datetime import datetime, timezone
datetime.fromtimestamp(-86400, tz=timezone.utc)
# datetime(1969, 12, 31, 0, 0, tzinfo=timezone.utc)
Gotcha: Some systems, particularly Windows-based ones and older PHP versions, don't support negative timestamps. If your application needs to handle dates before 1970, test your specific runtime environment.
Test negative timestamps with our converter.
Timestamps vs ISO 8601 vs RFC 2822
There are three dominant ways to represent dates in software. Here's when to use each.
Unix Timestamp
- Format: `1700000000`
- Best for: Storage, computation, comparison, database columns, API fields where both sides are machines
- Pros: Compact, unambiguous, timezone-free, trivially comparable (just compare integers), sorts correctly
- Cons: Not human-readable, no built-in timezone info for display
ISO 8601
- Format: `2023-11-14T22:13:20Z` or `2023-11-14T22:13:20+00:00`
- Best for: API responses, logs, JSON payloads where humans might read the data, interchange between systems
- Pros: Human-readable, widely supported, includes timezone info, sortable as strings
- Cons: Larger than a timestamp, parsing is more complex, multiple valid representations of the same moment
RFC 2822
- Format: `Tue, 14 Nov 2023 22:13:20 +0000`
- Best for: Email headers, HTTP headers (though HTTP uses a subset), legacy systems
- Pros: Very human-readable, includes day of week
- Cons: Verbose, not easily sortable, day-of-week is redundant information
Comparison Table
| Property | Unix Timestamp | ISO 8601 | RFC 2822 |
|---|---|---|---|
| Human-readable | No | Yes | Yes |
| Sortable | Yes (numeric) | Yes (string) | No |
| Size as text (bytes) | 10 | 20-25 | 29-31 |
| Timezone info | Implicit UTC | Explicit | Explicit |
| Sub-second precision | Via ms/us/ns variants | Yes (.123Z) | No |
| Comparison | Integer comparison | String comparison | Parse first |
Convert between all three formats with the Date Formatter.
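All three representations can be produced from, and round-tripped back to, a single timestamp using only Python's standard library (`email.utils` handles the RFC 2822 side):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

ts = 1700000000
dt = datetime.fromtimestamp(ts, tz=timezone.utc)

# ISO 8601 (Zulu suffix form)
iso = dt.isoformat().replace("+00:00", "Z")
print(iso)  # 2023-11-14T22:13:20Z

# RFC 2822
rfc2822 = format_datetime(dt)
print(rfc2822)  # Tue, 14 Nov 2023 22:13:20 +0000

# And back: both string forms recover the same timestamp
assert int(datetime.fromisoformat("2023-11-14T22:13:20+00:00").timestamp()) == ts
assert int(parsedate_to_datetime(rfc2822).timestamp()) == ts
```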
Practical Recommendation
Use Unix timestamps (or millisecond timestamps) for:
- Database storage
- Internal APIs between your own services
- Log indexing and time-series data
- Any field where you'll do math on dates (durations, ranges, comparisons)
Use ISO 8601 for:
- Public APIs
- JSON payloads that humans might inspect
- Configuration files
- Anything displayed in debugging tools
Use RFC 2822 only when a spec requires it (email, certain HTTP headers).
Common Gotchas
These are the timestamp bugs that show up in production. Learn them here so you don't learn them at 2 AM.
1. Timezone Confusion
The single most common timestamp bug: treating a timestamp as if it's in a local timezone.
// WRONG for logs and APIs: toString() renders the moment in the machine's local timezone
const date = new Date(1700000000 * 1000);
console.log(date.toString());
// "Tue Nov 14 2023 17:13:20 GMT-0500" (if your machine is EST)
// RIGHT: Use UTC methods for unambiguous output
console.log(date.toISOString());
// "2023-11-14T22:13:20.000Z"
A Unix timestamp is always UTC. When you convert it to a local time for display, make the timezone conversion explicit. Never assume the viewer's timezone matches the server's timezone.
2. Seconds vs Milliseconds Mix-Up
If your date shows up as January 1970 or the year 55,000, you've mixed up seconds and milliseconds.
// WRONG: Passing seconds where milliseconds are expected
new Date(1700000000)
// "Mon Jan 19 1970 ..." (way too early)
// RIGHT: Multiply by 1000
new Date(1700000000 * 1000)
// "Tue Nov 14 2023 ..."
Rule of thumb: Count the digits. 10 digits = seconds. 13 digits = milliseconds.
3. Daylight Saving Time
DST doesn't affect Unix timestamps directly (they're UTC), but it affects conversions to and from local times.
from datetime import datetime
from zoneinfo import ZoneInfo
# This timestamp falls during Eastern Daylight Time
ts1 = 1689000000  # Jul 10, 2023, 14:40 UTC, EDT (UTC-4)
dt1 = datetime.fromtimestamp(ts1, tz=ZoneInfo("America/New_York"))
print(dt1.isoformat())  # 2023-07-10T10:40:00-04:00
# This timestamp falls during Eastern Standard Time
ts2 = 1704067200  # Jan 1, 2024, 00:00 UTC, EST (UTC-5)
dt2 = datetime.fromtimestamp(ts2, tz=ZoneInfo("America/New_York"))
print(dt2.isoformat())  # 2023-12-31T19:00:00-05:00
The lesson: never hardcode a UTC offset like -05:00 for a timezone. Use named timezones (like America/New_York) that handle DST transitions automatically.
Test timezone conversions with the TZ Converter.
4. Leap Seconds
Unix time does not count leap seconds. A Unix day is always exactly 86,400 seconds, even though real UTC days occasionally have 86,401 seconds (when a leap second is inserted).
In practice, this means Unix timestamps are not precisely in sync with UTC during the second immediately following a leap second insertion. Most systems handle this by either "smearing" the leap second over a longer period (Google's approach) or by repeating a second.
For the vast majority of applications, leap seconds are irrelevant. If you're building something that needs sub-second synchronization with UTC (satellite navigation, high-frequency trading), you'll need TAI (International Atomic Time) rather than Unix timestamps.
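Python's `datetime`, like most mainstream date libraries, follows the same convention, which makes the effect visible: the UTC day ending December 31, 2016 really contained a leap second, yet timestamp arithmetic reports exactly 86,400 seconds:

```python
from datetime import datetime, timezone

# Dec 31, 2016 contained a leap second (23:59:60 UTC existed),
# but Unix time pretends every day has exactly 86,400 seconds
start = datetime(2016, 12, 31, tzinfo=timezone.utc)
end = datetime(2017, 1, 1, tzinfo=timezone.utc)
print(int(end.timestamp() - start.timestamp()))  # 86400
```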
5. Database Timezone Traps
Different databases handle timestamps differently:
- PostgreSQL: `TIMESTAMP` is timezone-naive. `TIMESTAMPTZ` stores UTC and converts to the session timezone on output. Always use `TIMESTAMPTZ`.
- MySQL: `TIMESTAMP` converts to/from UTC based on the connection timezone. `DATETIME` stores exactly what you give it. Use `TIMESTAMP` for UTC or `DATETIME` with explicit UTC storage.
- SQLite has no native date type. Store timestamps as integers (Unix time) or ISO 8601 text.
- MongoDB: `Date` stores millisecond timestamps internally.
Best practice: Store all timestamps in UTC. Convert to local time only at the display layer. This is non-negotiable for any application that serves users in multiple timezones.
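A sketch of that best practice in Python (the timezone names are just examples; `zoneinfo` requires Python 3.9+):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Storage layer: always a UTC-based Unix timestamp
stored_ts = 1700000000

# Display layer: convert per user, never mutate what's stored
utc_dt = datetime.fromtimestamp(stored_ts, tz=timezone.utc)
for tz_name in ("America/New_York", "Europe/Berlin", "Asia/Tokyo"):
    local = utc_dt.astimezone(ZoneInfo(tz_name))
    print(tz_name, local.isoformat())
# America/New_York 2023-11-14T17:13:20-05:00
# Europe/Berlin 2023-11-14T23:13:20+01:00
# Asia/Tokyo 2023-11-15T07:13:20+09:00
```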
6. Floating-Point Timestamps
Some languages return timestamps as floating-point numbers (Python's time.time(), for example). Floating-point arithmetic can introduce precision errors:
import time
t = time.time() # e.g., 1700000000.123456
# IEEE 754 double has ~15-17 significant digits
# A timestamp in 2023 uses 10 digits for the integer part,
# leaving only 5-7 digits of sub-second precision
For most applications this is fine. But if you need microsecond or nanosecond precision, use integer representations (multiply by the appropriate factor and store as an integer).
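In Python, `time.time_ns()` (available since 3.7) gives you the integer path directly:

```python
import time

t_float = time.time()   # float seconds: sub-second precision is limited
t_ns = time.time_ns()   # integer nanoseconds: no floating-point rounding

# Derive coarser integer units from the nanosecond value
t_us = t_ns // 1_000
t_ms = t_ns // 1_000_000
t_s = t_ns // 1_000_000_000

# The integer path and the float path agree at the second level
assert abs(t_s - int(t_float)) <= 1
```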
When to Use Timestamps vs Formatted Dates
This is a design question that comes up in every project. Here's a clear framework.
Use Unix Timestamps When:
- Storing dates in a database for computation (indexing, range queries, time-series)
- Passing dates between backend services where both sides understand the format
- Calculating durations or differences between two points in time
- Sorting events chronologically
- Working with time-series data (metrics, logs, events)
Use Formatted Dates (ISO 8601) When:
- Exposing dates in a public API that third parties consume
- Writing dates to log files that humans will read
- Storing dates where readability matters (configuration files, metadata)
- Including timezone information that the consumer needs for display
- Interoperating with systems that expect string-formatted dates
Use Neither (Use a Datetime Object) When:
- Working within a single application — use your language's native datetime type
- Doing complex calendar math (add 3 months, find next Tuesday, business days)
- Handling recurring events or schedules
Frequently Asked Questions
What is epoch time?
Epoch time is another name for a Unix timestamp. The "epoch" refers to the reference point: January 1, 1970, 00:00:00 UTC. When someone says "epoch time" or "Unix epoch," they mean the number of seconds since that moment.
What does timestamp 0 mean?
Timestamp 0 represents January 1, 1970, 00:00:00 UTC, which is the Unix epoch. It's the starting point from which all Unix timestamps are counted. If you see a date showing "January 1, 1970" in your application, it usually means a timestamp variable was accidentally set to 0 or left uninitialized.
Why does my timestamp show January 1, 1970?
This almost always means your timestamp value is 0, null, or undefined, and the system interpreted it as epoch. Check that your timestamp variable is properly initialized and that you're not accidentally passing null to a date constructor.
How do I get the current Unix timestamp?
In a terminal, run date +%s. In JavaScript, use Math.floor(Date.now() / 1000). In Python, use int(time.time()). See the full language reference above for all 10 languages.
Are Unix timestamps always in UTC?
Yes. A Unix timestamp has no timezone. It represents an absolute moment in time, measured from the UTC epoch. When you convert a timestamp to a human-readable date, that's when timezone comes into play.
How do I convert between seconds and milliseconds?
Multiply by 1,000 to go from seconds to milliseconds. Divide by 1,000 (and floor) to go from milliseconds to seconds. Count the digits: 10 = seconds, 13 = milliseconds.
Can Unix timestamps be negative?
Yes. Negative timestamps represent dates before January 1, 1970. For example, -86400 represents December 31, 1969, 00:00:00 UTC. Most modern languages and databases support negative timestamps, though some older systems may not.
What is the maximum Unix timestamp?
For 32-bit systems: 2,147,483,647 (January 19, 2038, 03:14:07 UTC). For 64-bit systems: 9,223,372,036,854,775,807, which is approximately 292 billion years from now. You won't run out.
Is a Unix timestamp the same as an epoch?
Not exactly. "Epoch" refers to the reference point (January 1, 1970). "Timestamp" refers to the number of seconds since that epoch. But in casual usage, "epoch time" and "Unix timestamp" are used interchangeably.
How accurate is a Unix timestamp?
A standard Unix timestamp is accurate to the second. Millisecond timestamps are accurate to 1/1000 of a second. The actual precision depends on your system's clock. Most modern operating systems keep time accurate to within a few milliseconds via NTP (Network Time Protocol), though the timestamp format can represent finer granularity.
Do leap years affect Unix timestamps?
Leap years do not cause any special issues with Unix timestamps. The timestamp is simply counting seconds, regardless of whether those seconds fall in a leap year. The leap day (February 29) just adds 86,400 seconds to that year's total. All the complexity of leap years is handled when converting between timestamps and calendar dates.
Need to convert a timestamp right now? Open the Epoch Converter -- it auto-detects seconds vs milliseconds, shows results in your local timezone, and includes code snippets for your language of choice.