Total Data Size: 60 PB
Data Since: 2012
Service Downtime since Inception: ZERO

Our data, technology and services are the go-to for regulatory bodies, banks, funds, service providers, and other institutions including:
We provide a full spectrum of access methods and latency profiles for low-latency market data. For ultra-low-latency users, we offer application and compute hosting in our colocation infrastructure.
RESTful API & WebSocket
One standardized API to access the entire universe of US equities, options, futures, and more: stream real-time prices, retrieve historical data, company information, news, and more.
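As an illustration of the kind of request a standardized REST API supports, here is a minimal Python sketch. The base URL, route, query parameters, and JSON field names are all invented for illustration and are not the actual algoseek API.

```python
import json
import urllib.parse

# Hypothetical endpoint -- actual algoseek routes and parameters may differ.
BASE_URL = "https://api.example.com/v1"

def build_bars_url(symbol, start, end, interval="1m"):
    """Build a historical-bars request URL (illustrative only)."""
    query = urllib.parse.urlencode(
        {"symbol": symbol, "start": start, "end": end, "interval": interval}
    )
    return f"{BASE_URL}/bars?{query}"

def parse_bars(payload):
    """Parse a JSON bar response into (timestamp, close) tuples."""
    bars = json.loads(payload)["bars"]
    return [(b["t"], b["c"]) for b in bars]

# Sample response body with invented field names (t/o/h/l/c/v).
sample = ('{"bars": [{"t": "2024-01-02T09:30:00Z", "o": 187.1, '
          '"h": 187.4, "l": 186.9, "c": 187.3, "v": 120000}]}')
print(build_bars_url("AAPL", "2024-01-02", "2024-01-03"))
print(parse_bars(sample))
```

The same request/response shapes would typically be reused over the WebSocket channel for streaming updates.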
TCP Socket
High-speed real-time streaming directly from algoseek's third-generation ticker plant. Ideal for latency-sensitive applications such as high-frequency trading and arbitrage.
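Latency-sensitive TCP feeds are usually consumed as fixed-width binary frames decoded in place. The wire layout below is an assumption for illustration, not the actual proprietary ticker-plant framing:

```python
import struct

# Hypothetical frame layout: uint64 epoch-nanoseconds, 8-byte padded symbol,
# uint32 price in 1/10000 USD, uint32 trade size -- network byte order.
TRADE = struct.Struct("!Q8sII")

def decode_trade(frame: bytes) -> dict:
    """Decode one fixed-width trade frame into a dict."""
    ts, sym, px, qty = TRADE.unpack(frame)
    return {
        "ts": ts,
        "symbol": sym.rstrip(b"\x00").decode(),
        "price": px / 10000,
        "size": qty,
    }

# Round-trip a sample frame as a socketless demonstration.
frame = TRADE.pack(1700000000000000000, b"AAPL", 1873000, 100)
print(decode_trade(frame))
```

Fixed-width framing like this avoids parsing overhead on the hot path, which is why it is common in high-frequency settings.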
Kafka
Guaranteed delivery of every message, even at peak market volume. Ideal for applications sensitive to duplicate messages and packet loss, such as real-time data analytics.
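Guaranteed delivery in messaging systems is typically at-least-once, so a consumer that must not double-count drops anything it has already processed. A minimal sketch of that idea, with an assumed monotonically increasing `seq` field (not the actual algoseek message schema):

```python
def dedupe(messages):
    """Yield messages once each, skipping redeliveries by sequence number."""
    last_seq = -1
    for msg in messages:
        if msg["seq"] <= last_seq:
            continue  # duplicate redelivered after a retry or rebalance
        last_seq = msg["seq"]
        yield msg

# Simulated stream containing one redelivered message.
stream = [
    {"seq": 0, "px": 10.0},
    {"seq": 1, "px": 10.1},
    {"seq": 1, "px": 10.1},  # duplicate
    {"seq": 2, "px": 10.2},
]
print([m["seq"] for m in dedupe(stream)])  # [0, 1, 2]
```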
We strive to make real-time and historical financial data affordable and accessible for everyone. Whether you're backtesting strategies, conducting research, or building your next financial application, our delivery platform offers the access technologies and formats that suit you best.
SQL Query
Beyond simple queries via the RESTful API, our proprietary database supports full SQL, letting you perform complex queries and computations with extraordinary speed and ease.
Download
With our on-demand delivery platform, you can choose from a diverse array of file formats, including CSV, JSON, Parquet, and binary. Whether you prefer AWS S3, SFTP, Dropbox, email, Snowflake, or more: simply specify your destination and we'll seamlessly deliver your data files.
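As a sketch of the kind of aggregation full SQL support enables, the query below computes per-minute low, high, and volume from tick-level trades. It runs against sqlite3 here purely for illustration; the table and column names are invented and the real system's schema and dialect may differ.

```python
import sqlite3

# In-memory database standing in for a columnar tick store.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trades (symbol TEXT, minute TEXT, price REAL, size INTEGER);
INSERT INTO trades VALUES
  ('AAPL', '09:30', 187.1, 100),
  ('AAPL', '09:30', 187.4, 200),
  ('AAPL', '09:31', 187.2, 150);
""")

# Standard-SQL minute-bar aggregation.
rows = conn.execute("""
SELECT minute,
       MIN(price) AS low,
       MAX(price) AS high,
       SUM(size)  AS volume
FROM trades
WHERE symbol = 'AAPL'
GROUP BY minute
ORDER BY minute
""").fetchall()
print(rows)  # [('09:30', 187.1, 187.4, 300), ('09:31', 187.2, 187.2, 150)]
```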
Cloud-based Services
Access the algoseek Console or download our Excel & Google Sheets plug-ins to work with cloud data immediately. No need to download data or install applications.
Our proprietary ticker plant has been designed to withstand the most extreme market conditions. Since our inception in 2015, we have successfully navigated multiple volume explosions with zero service downtime and zero data loss.
For each exchange data feed, we use two fully segregated networks and two sets of servers to receive and consume the data, preventing interruption in the event of hardware failure.
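Redundant feed paths like this are commonly combined by A/B arbitration: take each sequence number from whichever path delivers it first. A minimal sketch under assumed message shapes (the `seq` and `arrival` fields are illustrative, not the actual feed format):

```python
import heapq

def arbitrate(*feeds):
    """Merge redundant feeds by arrival time, emitting each sequence number once."""
    merged = heapq.merge(*feeds, key=lambda m: m["arrival"])
    seen = set()
    for msg in merged:
        if msg["seq"] in seen:
            continue  # already delivered by the other path
        seen.add(msg["seq"])
        yield msg

# Path A drops seq 2; path B is slower but complete, so nothing is lost.
feed_a = [{"seq": 1, "arrival": 0.1}, {"seq": 3, "arrival": 0.4}]
feed_b = [{"seq": 1, "arrival": 0.2}, {"seq": 2, "arrival": 0.3},
          {"seq": 3, "arrival": 0.5}]
print([m["seq"] for m in arbitrate(feed_a, feed_b)])  # [1, 2, 3]
```

The arbitrated stream is both faster than either single path and resilient to loss on one of them.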
We store normalized data in (1) a row-based ticker plant database supporting ultra-low-latency queries and feed replay, and (2) columnar ArdaDB providing full SQL capability at sub-second latency. Additionally, we keep a copy of the raw PCAPs.
Every minute, we cross-check OHLC and volume figures independently received and calculated by our primary and backup servers to ensure data accuracy.
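The cross-check described above amounts to comparing per-minute bars from two independent pipelines and flagging disagreements. A hedged sketch, with invented data shapes:

```python
def cross_check(primary, backup, fields=("o", "h", "l", "c", "v")):
    """Return the minutes where primary and backup bars disagree or one is missing."""
    mismatches = []
    for minute in sorted(set(primary) | set(backup)):
        a, b = primary.get(minute), backup.get(minute)
        if a is None or b is None or any(a[f] != b[f] for f in fields):
            mismatches.append(minute)
    return mismatches

primary = {"09:30": {"o": 187.1, "h": 187.4, "l": 186.9, "c": 187.3, "v": 300}}
backup  = {"09:30": {"o": 187.1, "h": 187.4, "l": 186.9, "c": 187.3, "v": 300}}
print(cross_check(primary, backup))  # [] -- the pipelines agree
```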