The Ready-Now Cloud for AI Developers

Available now: on-demand or reserved GPU instances and clusters for AI training and inference workloads.


Let’s Work


Our Collaborators


Enterprise

Secure and stable environments for larger, sensitive workloads.

Researchers

Startups

AI Services


We own our clusters

The highest-performing NVIDIA GPUs, integrated into a full turnkey VM solution, immediately ready for a proof of concept with our development team.



Our Clusters

Sequoia

Available Now

128 HGX H100 nodes (1024 GPUs)

40 x86 CPU nodes with dual CPUs and 128 GB DIMMs

40 NVMe Storage Nodes, PB-level Network Storage

3.2 TiB/s IB or RoCEv2 GPU Interconnects

Cherry

Online March 15

160 HGX H200 nodes (1280 GPUs)

30 x86 CPU nodes with dual CPUs and 128 GB DIMMs

20 NVMe Storage Nodes, PB-level Network Storage

3.2 TiB/s IB or RoCEv2 GPU Interconnects

Baobab

No vacancy

256 HGX H100 nodes (2048 GPUs)

40 x86 CPU nodes with dual CPUs and 128 GB DIMMs

30 Storage Servers, 2 PB Storage

3.2 TiB/s IB or RoCEv2 GPU Interconnects

On-Demand Instances

3-minute spin-up for GPU instances, billed hourly. We get you into testing first; then you run your workload. Afterwards, scale with us however you’d like, with no long-term commitments.
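
Once an instance is up, a quick sanity check like the short Python sketch below confirms the GPUs are visible and working before you start your workload. It is a hypothetical example assuming a standard image with NVIDIA drivers and PyTorch preinstalled, and uses nothing specific to Horizon Compute's tooling.

# Minimal sanity check for a freshly spun-up GPU instance (hypothetical example;
# assumes a standard image with NVIDIA drivers and PyTorch installed,
# nothing specific to Horizon Compute).
import subprocess
import torch

# Driver-level view: list every GPU the node exposes.
print(subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout)

# Framework-level view: confirm PyTorch sees the same devices and can run
# a small matrix multiply on each one before launching a real workload.
assert torch.cuda.is_available(), "No CUDA devices visible to PyTorch"
for i in range(torch.cuda.device_count()):
    x = torch.randn(1024, 1024, device=f"cuda:{i}")
    y = x @ x  # trivial kernel launch to confirm the GPU executes work
    print(torch.cuda.get_device_name(i), tuple(y.shape))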

Private Cloud

Thousands of NVIDIA H100s, H200s, and soon B200s available for reservation, tailored to your needs.

Coming Soon

B200s and data center expansion projects in Japan and the United States to scale our inference and training workloads.

Get connected with our founders and sales team

Let’s Work

Horizon Compute

3009 Pasadena Fwy Ste 140

Pasadena, TX 77503
