ivraatiems 7 hours ago [-]
There's some irony in the fact that this website reads as extremely NOT AI-generated, very human in the way it's designed and the tone of its writing.
Still, this is a great idea, and one I hope takes off. I think there's a good argument that the future of AI is in locally-trained models for everyone, rather than relying on a big company's own model.
One thought: The ability to conveniently get this onto a 240v circuit would be nice. Having to find two different 120v circuits to plug this into will be a pain for many folks.
solarkraft 5 hours ago [-]
I find that the most respected writing about AI has very few signs of being written by AI. I'm guessing that's because people in the space are very sensitive to the signs and signal vs. noise.
rimeice 3 hours ago [-]
And because people writing anything worth reading are using the process of writing to form a proper argument and develop their ideas. It’s just not possible to do that by delegating even a small chunk of the work to AI.
Aperocky 5 hours ago [-]
I found it useful to preface with
* this section written by me typing on keyboard *
* this section produced by AI *
And usually both exist in documents and lengthy communications. This gets what I wanted across with exactly my intention, and then I can attach a 10x-length AI appendix that provides helpful indexing and references.
jolmg 2 hours ago [-]
> attach a 10x-length AI appendix that provides helpful indexing and references.
Are references helpful when they're generated? The reader could've generated them themselves. References would be helpful if they were personal references of stuff you actually read and curated. The value then would be getting your taste. References from an AI may well be good-looking nonsense.
jofzar 1 hour ago [-]
Good? That's what I want out of all websites. I don't want to read what an AI believes is the best thing for a website, I want to know the honest truth.
adrianwaj 2 hours ago [-]
"locally-trained models for everyone"
Wouldn't there be a massive duplication of effort in that case? It'll be interesting to see how the costs play out. There are security benefits to think about as well in keeping things local-first.
all2 19 minutes ago [-]
There are multiple 'folding at home'-style efforts for AI models at this point. I get the impression that we will see a frontier model released this year built on a system like this.
Lerc 7 hours ago [-]
I am a little surprised that they openly solicit code contributions with "Invest with your PRs" but don't have any statement on AI contributions.
Maybe the volume for them is OK, such that well-intentioned but poor-quality PRs can be politely (or otherwise, culture depending) disregarded and the method of generation is not important.
KeplerBoy 6 hours ago [-]
Tinygrad sure shared a few opinions on AI PRs on Twitter. I believe the gist was "we have Claude code as well, if that's all you bring don't bother".
all2 18 minutes ago [-]
That's a pretty excellent take, IMO. Just an undirected AI model doesn't do much, especially when the core team has time with the code, domain expertise, _and_ Claude.
cyanydeez 6 hours ago [-]
I'm starting to think that if you have an AI repo that's basically about codegen, you should just close all issues automatically, then manually (or whatever) open the ones you/maintainers actually care about. That's about the only way to fight the signal/noise problem AIs are creating.
Then you could focus fire, like the script kiddies did with DDoS in the old days, on fixing whatever preferred issues you have.
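A minimal sketch of that auto-close policy with PyGithub, assuming a bot token in GITHUB_TOKEN; the repo name and the "triaged" label are placeholders for whatever maintainers actually use:

  # Auto-close untriaged issues; maintainers reopen the ones that matter.
  # pip install PyGithub. Repo and label names below are hypothetical.
  import os
  from github import Github

  gh = Github(os.environ["GITHUB_TOKEN"])
  repo = gh.get_repo("someorg/somerepo")

  for issue in repo.get_issues(state="open"):
      if issue.pull_request is not None:
          continue  # leave PRs alone; this policy is for issues only
      if "triaged" not in {label.name for label in issue.labels}:
          issue.create_comment("Auto-closed to manage volume; "
                               "a maintainer will reopen if it matters.")
          issue.edit(state="closed")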
wat10000 7 hours ago [-]
If you’re spending $65,000 on this thing, needing two circuits seems like a minor problem
ycui1986 3 hours ago [-]
they could have gone with the Max-Q version of the RTX PRO 6000 and only require a 120V circuit. 10% performance hit, but half the power.
fundamentally, looks like they are shipping consumer off-the-shelf hardware in a custom box.
ericd 2 hours ago [-]
Yeah, the other big benefit is that the Max-Q's have blowers that exhaust the hot air out of the box, the workstation cards would each blow their exhaust straight into the intake of the card behind it. The last card in that chain would be cooking, as the air has already been heated up by 1800W, essentially a hair dryer on high.
Or could be the server edition 6000s that just have a heatsink and rely on the case to drive air through them, those are 600W cards.
ivraatiems 6 hours ago [-]
The $12,000 one also requires it.
knollimar 4 hours ago [-]
Easier to get two circuits than rewire a breaker in an office you might be renting, no?
(I work for an electrical contractor so my sense of ease might be overcorrecting)
markdown 2 hours ago [-]
And 240v is orders of magnitude more common worldwide than 120v
wat10000 4 hours ago [-]
The specs show that it only has one PSU. The docs just say that it has 2 and thus needs two circuits, but I’d guess that was meant to be for the more expensive one.
isatty 6 hours ago [-]
Surprisingly affordable but I’m not really interested in the 9070XT.
If it shipped with like 4090+ (for a higher price) it’d be more tempting.
dmarcos 6 hours ago [-]
They offered a version a few months ago with 4x5090 for 25k
https://x.com/__tinygrad__/status/1983917797781426511
Stopped due to rising GPU prices:
https://x.com/__tinygrad__/status/2011263292753526978
The 9070 XT provides roughly the same inference performance as the RTX PRO 4500, at double the power and half the cost. So this one is optimized for total BOM cost.
trollbridge 7 hours ago [-]
A typical U.S. 240V circuit is actually just two 120V circuits. Fairly trivial to rewire for that.
Salgat 5 hours ago [-]
It's more accurate to say that the typical 120V circuit is just a 240V source with the neutral tapped into the midpoint of the transformer winding.
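For concreteness (standard split-phase arithmetic, not from the thread): the two hot legs are each 120V relative to the midpoint neutral but 180 degrees out of phase, so hot-to-neutral gives 120V while hot-to-hot gives 120 - (-120) = 240V. Nothing about the service changes; you're just picking a different pair of conductors.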
reactordev 5 hours ago [-]
This. It definitely comes in at a higher voltage.
amluto 1 hour ago [-]
Sort of? It’s 120V RMS to ground.
0xbadcafebee 3 hours ago [-]
I think you're forgetting the wires? If you have one outlet with a 15-20A 120V circuit, then the wiring is almost certainly rated for 15-20A. If you just "combined" two 120V circuits into a 240V circuit, you still need an outlet that is rated for 30A, the wires leading to it also need to be rated for 30A, and it definitely needs a neutral. So you still need a new wire run if you don't have two 120V circuits right where you wanna plug in the box. To pass code you also may need to upsize conduit. If load is continuously near peak, it should be 50A instead of 30.
So basically you need a brand new circuit run if you don't have two 120V circuits next to each other. But if you're spending $65k on a single machine, an extra grand for an electrician to run conduit should be peanuts. While you're at it I would def add a whole-home GFCI, lightning/EMI arrestor, and a UPS at the outlet, so one big shock doesn't send $65k down the toilet.
briandw 2 hours ago [-]
Correct me if I’m wrong, but doubling the volts doesn't change the amps, it doubles the watts. Watts = V*A.
subscribed 2 hours ago [-]
Doubling the volts halves the amps. P = I * V indeed.
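Worked example with assumed numbers: a 3600W load draws 3600/120 = 30A on a 120V circuit but only 3600/240 = 15A at 240V. The load fixes the watts; doubling the voltage halves the amps the wiring has to carry.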
fc417fc802 3 hours ago [-]
I think you might've misread GP. (Or maybe I did?)
He's not saying you would use it as two separate 120v circuits sharing a ground but rather as a single 240v circuit. His point is that it's easy to rewire for 240v since it's the same as all the other wiring in your house just with both poles exposed.
Of course you do have to run a new wire rather than repurpose what's already in the wall since you need the entire circuit to yourself. So I think it's not as trivial as he's making out.
But then at that wattage you'll also want to punch an exhaust fan in for waste heat so it's not like you won't already be making some modifications.
jcgrillo 6 hours ago [-]
If you actually use two 120V circuits that way and one breaker flips the other half will send 120V through the load back into the other circuit. So while that circuit's breaker is flipped it is still live. Very bad. Much better to use a 240V breaker that picks up two rails in the panel.
amluto 1 hour ago [-]
I assume the device has two separate PSUs, each of which accepts 120-240V, and neither of which will backfeed its supply.
ycui1986 3 hours ago [-]
i am guessing, without any proof, that when one breaker fails the server loses it all, or loses two GPUs, depending on whether the one connected to the CPU side failed.
fc417fc802 2 hours ago [-]
GPUs aren't electrically isolated from the motherboard though. An entire computer is a single unified power domain.
The only place where there's isolation is stuff like USB ports to avoid dangerous ground loop currents.
That said I believe the PSU itself provides full isolation and won't backfeed so using two on separate circuits should (maybe?) be safe. Although if one circuit tripped the other PSU would immediately be way over capacity. Hopefully that doesn't cause an extended brownout before the second one disables itself.
doubled112 7 hours ago [-]
I’ve actually had half of my dryer outlet fail when half of the breaker failed.
Can confirm.
amluto 6 hours ago [-]
Sometimes. 240V circuits may or may not have a neutral.
roarcher 49 minutes ago [-]
> In order to keep prices low and quality high, we don't offer any customization to the box or ordering process. If you aren't capable of ordering through the website, I'm sorry but we won't be able to help.
Has this guy never worked on a B2B product before? Nobody is going to order a $10 million piece of infrastructure through your website's order form. And they are definitely going to want to negotiate something, even if it's just a warranty. And you'll do it because they're waving a $10 million check in your face.
The tone of this website is arrogant to the point of being almost hostile. The guy behind this seems to think that his name carries enough weight to dictate terms like this, among other things like requiring candidates to have already contributed to the project to even be considered for a job. I would be extremely surprised if anyone except him thinks he's that important.
jen729w 24 minutes ago [-]
Your framing of this section is misleading. On the site it's preceded by a FAQ-style 'question':
> Can you fill out this supplier onboarding form?
That's very important context, as anyone who has been asked to fill out a supplier onboarding form (hi) will attest.
wmf 45 minutes ago [-]
He's not actually selling the exabox yet. It sounds like he put up a hypothetical config to see if anyone is interested.
jrflowers 47 minutes ago [-]
I imagine that the FAQ might get updated when there’s actually a $10M machine for sale
roarcher 42 minutes ago [-]
Maybe. Frankly I'd be very surprised if any business ordered a $65k machine that way either.
jee599 9 minutes ago [-]
The 120B parameter sweet spot is interesting — big enough for most production tasks but small enough to fit on consumer-grade hardware. The real question is inference latency under sustained load. For batch processing this looks compelling, but for interactive use cases (agents making tool calls, real-time code generation) you're still bottlenecked by tokens/sec at that model size. Curious what the actual throughput numbers look like running something like a 70B quantized model with continuous batching.
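As a rough sanity check (my own back-of-envelope, every number assumed): decode is usually memory-bandwidth-bound, so tokens/sec is capped near memory bandwidth divided by the bytes of active weights streamed per token:

  # Bandwidth-bound ceiling on decode throughput; illustrative numbers only.
  def est_tok_per_sec(bandwidth_gb_s: float, active_params_b: float,
                      bytes_per_param: float) -> float:
      # Each generated token streams the active weights through the GPU once.
      bytes_per_token = active_params_b * 1e9 * bytes_per_param
      return bandwidth_gb_s * 1e9 / bytes_per_token

  # Dense 70B at ~4-bit (0.5 bytes/param) on ~1 TB/s of GDDR:
  print(est_tok_per_sec(1000, 70, 0.5))  # ~28 tok/s ceiling
  # MoE with ~5B active params at 8-bit on the same card:
  print(est_tok_per_sec(1000, 5, 1.0))   # ~200 tok/s ceiling

Real numbers land well below those ceilings once KV reads, batching, and kernel overhead enter, but the ratio explains why MoE models feel so much faster locally than dense ones.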
mellosouls 51 minutes ago [-]
Where is the 120B documented? This seems to be an editorialized title.
Edit: found a third party referencing the claim but it doesn't belong in the title here I think:
Meet the World’s Smallest ‘Supercomputer’ from Tiiny AI; A Machine Bold Enough to Run 120B AI Models Right in the Palm of Your Hand
https://wccftech.com/meet-the-worlds-smallest-supercomputer-...
That third party link is from a different company (Tiiny with an extra i)
Now I'm wondering if the HN title was submitted by some AI bot that couldn't tell the difference.
bastawhiz 7 hours ago [-]
There's no way the red v2 is doing anything with a 120b parameter model. I just finished building a dual a100 ai homelab (80gb vram combined with nvlink). Similar stats otherwise. 120b only fits with very heavy quantization, enough to make the model schizophrenic in my experience. And there's no room for kv, so you'll OOM around 4k of context.
I'm running a 70b model now that's okay, but it's still fairly tight. And I've got 16gb more vram than the red v2.
I'm also confused why this is 12U. My whole rig is 4u.
The green v2 has better GPUs. But for $65k, I'd expect a much better CPU and 256gb of RAM. It's not like a threadripper 7000 is going to break the bank.
I'm glad this exists but it's... honestly pretty perplexing
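For anyone checking the fit themselves, the rough accounting goes like this (a sketch; the architecture numbers are hypothetical, not specs from the page):

  # Rough VRAM budget for a dense LLM: weights + KV cache, in GB.
  def weights_gb(params_b: float, bytes_per_param: float) -> float:
      return params_b * bytes_per_param

  def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                  context: int, bytes_per_elem: int = 2) -> float:
      # K and V tensors, per layer, per token of context.
      return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

  # Hypothetical 120B dense model: 96 layers, 8 KV heads (GQA), head_dim 128.
  print(weights_gb(120, 0.5))           # 60.0 GB of weights even at ~4-bit
  print(kv_cache_gb(96, 8, 128, 4096))  # ~1.6 GB of KV at 4k context

A model without GQA stores K and V for the full head count, which inflates that last figure by roughly an order of magnitude and is consistent with OOMing around 4k of context.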
Aurornis 1 hour ago [-]
> There's no way the red v2 is doing anything with a 120b parameter model.
I don't see the 120B claim on the page itself. Unless the page has been edited, I think it's something the submitter added.
I agree, though. The only way you're running 120B models on that device is either extreme quantization or by offloading layers to the CPU. Neither will be a good experience.
These aren't a good value buy unless you compare them to fully supported offerings from the big players.
It's going to be hard to target a market where most people know they can put together the exact same system for thousands of dollars less and have it assembled in an afternoon. RTX 6000 96GB cards are in stock at Newegg for $9000 right now which leaves almost $30,000 for the rest of the system. Even with today's RAM prices it's not hard to do better than that CPU and 256GB of RAM when you have a $30,000 budget.
oceanplexian 7 hours ago [-]
It will work fine but it’s not necessarily insane performance. I can run a q4 of gpt-oss-120b on my Epyc Milan box that has similar specs and get something like 30-50 Tok/sec by splitting it across RAM and GPU.
The thing that’s less useful is the 64G VRAM/128G System RAM config; even the large MoE models only need 20B for the router, and the rest of the VRAM is essentially wasted (mixing experts between VRAM and system RAM has basically no performance benefit).
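If you want to reproduce that kind of CPU/GPU split, here's a minimal sketch with llama-cpp-python; the model path and layer count are placeholders to tune against your VRAM:

  # Partial GPU offload: n_gpu_layers layers in VRAM, the rest in system RAM.
  # pip install llama-cpp-python (built with CUDA or ROCm support).
  from llama_cpp import Llama

  llm = Llama(
      model_path="models/gpt-oss-120b-Q4_K_M.gguf",  # placeholder path
      n_gpu_layers=24,  # raise until you run out of VRAM
      n_ctx=8192,       # KV cache grows linearly with this
  )
  out = llm("Summarize MoE routing in two sentences.", max_tokens=128)
  print(out["choices"][0]["text"])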
syntaxing 5 hours ago [-]
Split RAM and GPU impacts it more than you think. I would be surprised if the red box doesn’t outperform you by 2-3X for both PP and TG
overfeed 5 hours ago [-]
> I'm also confused why this is 12U. My whole rig is 4u.
I imagine that's because they are buying a single SKU for the shell/case. I imagine their answer to your question would be: In order to keep prices low and quality high, we don't offer any customization to the server dimensions
ottah 3 hours ago [-]
That's just such a massively oversized server for the number of GPUs. It's not like they're doing anything special either. I can buy an appropriately sized Supermicro chassis myself and throw some cards in it. They're really not adding enough value to justify overspending on anything.
ottah 3 hours ago [-]
Honestly two rtx 8000s would probably have a better return on investment than the red v2. I have an eight gpu server, five rtx 8000, three rtx 6000 ada. For basic inference, the 8000s aren't bad at all. I'm sure the green with four rtx pro 6000s are dramatically faster, but there's a $25k markup I don't honestly understand.
ericd 4 hours ago [-]
Was that cheaper than a Blackwell 6000?
But yeah, 4x Blackwell 6000s are ~32-36k, not sure where the other $30k is going.
segmondy 3 hours ago [-]
folks have more money than sense. gpt-oss-120b full quant runs on my quad 3090 at 100tk/sec and that's with llama.cpp; with vllm it will probably run at 150tk/sec, and that's without batching.
amarshall 2 hours ago [-]
You're almost certainly (definitely, in fact) confusing the 120b and 20b models.
Aurornis 58 minutes ago [-]
> gpt-oss-120b full quant runs on my quad 3090
A 120B model cannot fit on 4 x 24GB GPUs at full quantization.
Either you're confusing this with the 20B model, or you have 48GB modded 3090s.
ericd 2 hours ago [-]
How're you fitting a model made for 80 gig cards onto a GPU with 24 gigs at full quant?
zozbot234 2 hours ago [-]
Offloading the MoE layers to CPU inference is the easiest way, though a bit of a drag on performance
ericd 2 hours ago [-]
Yeah, I'd just be pretty surprised if they were getting 100 tokens/sec that way.
EDIT: Either they edited that to say "quad 3090s", or I just missed it the first time.
bastawhiz 3 hours ago [-]
I bought the A100s used for a little over $6k each.
ericd 2 hours ago [-]
Oh, why'd you go that route? Considering going beyond 80 gigs with nvlink or something?
zozbot234 7 hours ago [-]
> And there's no room for kv, so you'll OOM around 4k of context.
Can't you offload KV to system RAM, or even storage? It would make it possible to run with longer contexts, even with some overhead. AIUI, local AI frameworks include support for caching some of the KV in VRAM, using a LRU policy, so the overhead would be tolerable.
tcdent 6 hours ago [-]
Not worth it. It is a very significant performance hit.
With that said, people are trying to extend VRAM into system RAM or even NVMe storage, but as soon as you hit the PCI bus with the high bandwidth layers like KV cache, you eliminate a lot of the performance benefit that you get from having fast memory near the GPU die.
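The ratio is the whole story: PCIe 4.0 x16 moves roughly 32 GB/s per direction, versus on the order of 1 TB/s for the GDDR on a modern card, so any KV block or layer served across the bus runs at something like 1/30th the bandwidth of one resident in VRAM.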
zozbot234 5 hours ago [-]
> With that said, people are trying to extend VRAM into system RAM or even NVMe storage
Only useful for prefill (given the usual discrete-GPU setup; iGPU/APU/unified memory is different and can basically be treated as VRAM-only, though a bit slower) since the PCIe bus becomes a severe bottleneck otherwise as soon as you offload more than a tiny fraction of the memory workload to system memory/NVMe. For decode, you're better off running entire layers (including expert layers) on the CPU, which local AI frameworks support out of the box. (CPU-run layers can in turn offload to storage for model parameters/KV cache as a last resort. But if you offload too much to storage (insufficient RAM cache) that then dominates the overhead and basically everything else becomes irrelevant.)
bastawhiz 3 hours ago [-]
The performance already isn't spectacular with it running all in vram. It'll obviously depend on the model: MoE will probably perform better than a dense model, and anything with reasoning is going to take _forever_ to even start beginning its actual output.
ranger_danger 6 hours ago [-]
I know llama.cpp can, it certainly improved performance on my RAM-starved GPU.
vessenes 7 hours ago [-]
The exabox is interesting. I wonder who the customer is; after watching the Vera Rubin launch, I cannot imagine deciding I wanted to compete with NVIDIA for hyperscale business right now. Maybe it’s aiming at a value-conscious buyer? Maybe it’s a sensible buy for a (relatively) cash-strapped ML startup; actually I just checked prices, and it looks like Vera Rubin costs half for a similar amount of GPU RAM. I’m certain that the interconnect will not be as good as NV’s.
I have no idea who would buy this. Maybe if you think Vera Rubin is three years out? But NV ships, man, they are shipping.
kulahan 6 hours ago [-]
Sometimes you can compete with the big boys simply because they built their infra 5+ years ago and it’s not economically viable for them to upgrade yet, because it’s a multi-billion dollar process for them. They can run a deficit to run you out of the business, but if you’re taking less than 0.01% of their business, I doubt they’d give a crap.
zozbot234 7 hours ago [-]
> The exabox is interesting.
Can it run Crysis?
WithinReason 6 hours ago [-]
Only gamers understand that reference
-- Jensen Huang
bastawhiz 7 hours ago [-]
Probably, the rdna5 can do graphics. But it would be a huge waste, since you could probably only use one of the 720 GPUs
dist-epoch 6 hours ago [-]
Yes, it can generate Crysis with diffusion models at 60 fps.
alexfromapex 3 hours ago [-]
$12,000 for the base model is insane. I have an Apple M3 Max with 128GB RAM that can run 120B parameter models using like 80 watts of electricity at about 15-20 tokens/sec. It's not amazing for 120B parameter models but it's also not 12 grand.
Thaxll 3 hours ago [-]
M3 max tflops is tiny compared to the 12k box. It's not even comparable.
zozbot234 3 hours ago [-]
M3 has tolerable decode performance for the price, and that's what people would care about most of the time. they underperform severely wrt. prefill, but that's a fraction of the workload. AI, even agentic AI, spends most of its time outputting tokens, not processing context in bulk.
segmondy 3 hours ago [-]
it's for fools. i bought 160gb of vram for $1000 last year. 96gb of p40 VRAM can be had for under $1000. And it will run gpt-oss-120b Q8 at probably 30tk/sec
timschmidt 3 hours ago [-]
P40 is Pascal architecture, which is no longer receiving driver or CUDA updates. And only available as used hardware. Fine for hobbyists, startups, and home labs, but there is likely a growing market of businesses too large to depend on used gear from ebay, but too small for a full rack solution from Nvidia. Seems like that's who they're targeting.
segmondy 2 hours ago [-]
99% of interest is in inference. If you want to fine-tune a model, just rent the best gpu in the cloud. It's often cheaper and faster.
timschmidt 2 hours ago [-]
Great option if you don't mind sharing your data with the cloud. Some businesses want to own the hardware their data resides on.
cootsnuck 2 hours ago [-]
How many businesses have the capabilities and expertise to train their own models?
timschmidt 2 hours ago [-]
No idea. Probably more every day.
segmondy 2 hours ago [-]
renting GPU, how is that sharing data with the cloud? you can rent GPU from GCP or AWS
timschmidt 1 hour ago [-]
I suppose if I rent a cloud GPU and just let it sit there dark and do nothing then I wouldn't have to move any data to it. Otherwise, I'm uploading some kind of work for it to do. And that usually involves some data to operate on. Even if it's just prompts.
jmspring 27 minutes ago [-]
Tinygrad devices are interesting. I wish I had screen captures - but their prices have gone up and some specs like RAM have gone down.
A single box with those specs without having to build/configure (the red and green) - I could see being useful if you had $ and not time to build/configure/etc yourself.
They answered your question with a pretty specific uptime target. Calling it a dodge and then moving the goalposts with a new question as your follow up doesn’t speak to you acting in good faith.
scratchyone 43 minutes ago [-]
tbh they really didn't, tinygrad's was clearly a joke response. they were not providing a real uptime target.
hmokiguess 4 hours ago [-]
Is this like the new equivalent of crypto mining? I remember the early days when they would sell hardware for farming crypto, now it’s AI?
latchkey 4 hours ago [-]
Kind of yes, except there is no block reward.
siliconc0w 5 hours ago [-]
Tinybox is cool but I think the market is maybe looking more for a turn-key explicit promise of some level of intelligence @ a certain Tok/s like "Kimi 2.5 at 50Tok/s".
alasdair_ 1 hour ago [-]
I just don’t believe that this can run inference on a 120 billion parameter model at actually useful speeds.
Obviously any Turing machine can run any size of model, so the “120B” claim doesn’t mean much - what actually matters is speed, and I just don’t believe this can be speedy enough for models that my $5000 5090-based pc is both too slow for and lacks enough vram for.
mnkyprskbd 1 hour ago [-]
Look at the GPU and RAM spec; 120b seems workable.
Aurornis 1 hour ago [-]
For the red v2?
120B could run, but I wouldn't want to be the person who had to use it for anything.
To be fair, the 120B claim doesn't appear on the webpage. I don't know where it came from, other than the person who submitted this to HN
mnkyprskbd 1 hour ago [-]
It is more than fair. Also, you're comparing your $5k device to $12k and, more importantly, $65k and >$10M devices.
Aurornis 54 minutes ago [-]
The "to be fair" part of my comment was saying that the tinygrad website doesn't claim 120B.
Also nobody is comparing this box to a $10M Nvidia rack-scale deployment. They're comparing it to putting all of the same parts into their Newegg basket and putting it together themselves.
ekropotin 7 hours ago [-]
IDK, I feel it’s quite overpriced, even with the current component prices.
I'm almost sure it's possible to custom build a machine as powerful as their red v2 within a 9k budget. And have a lot of fun along the way.
lostmsu 7 hours ago [-]
AMD now has 32 GiB Radeon AI Pro 9700. 4 of these (just under 2k each) would put you at 128 GiB VRAM
ekropotin 6 hours ago [-]
VRAM is not everything - GPU cores also matter (a lot) for inference
lostmsu 6 hours ago [-]
4x Radeon will have significantly more GPU power than say Mac Studio or DGX Spark.
cyanydeez 5 hours ago [-]
inference speed is like monitor Hz; sure, you go from 60 to 120Hz and that's noticeable, but unless your model is AGI, at some point you're just generating more code than you'll ever realistically be able to control, audit, and rely on.
So context is probably worth more per programming dollar than inference speed.
paxys 5 hours ago [-]
The problem with all these "AI box" startups is that the product is too expensive for hobbyists, and companies that need to run workloads at scale can always build their own servers and racks and save on the markup (which is substantial). Unless someone can figure out how to get cheaper GPUs & RAM there is really no margin left to squeeze out.
nine_k 4 hours ago [-]
Would a hedge fund that does not want to trust a public AI cloud just buy chassis, mobos, GPUs, etc., and build an equivalent themselves? I suspect they value their time differently.
kkralev 4 hours ago [-]
i think the real gap isn't at the high end tho. there's a whole segment of people who just want to run a 7-8b model locally for personal use without dealing with cloud APIs or sending their data somewhere. you don't need 4 GPUs for that, a jetson or even a mini pc with decent RAM handles it fine. the $12k+ market feels like it's chasing a different customer than the one who actually cares about offline/private AI
wmf 3 hours ago [-]
> just want to run a 7-8b model locally
This is already solved by running LM Studio on a normal computer.
zozbot234 3 hours ago [-]
Ollama or llama.cpp are also common alternatives. But a 8B model isn't going to have much real-world knowledge or be highly reliable for agentic workloads, so it makes sense that people will want more than that.
zach_vantio 1 hours ago [-]
the compute density is insane. but giving a 70B model actual write access locally for agentic workloads is a massive liability. they still hallucinate too much. raw compute without strict state control is basically just a blast radius waiting to happen.
ks2048 3 hours ago [-]
"... and likely the best performance/$".
"likely" doesn't inspire much confidence. Surely, they have those numbers, and if it was, they'd publicize the comparisons.
mmoustafa 6 hours ago [-]
I would love to see real-life tokens/sec values advertised for one or various specific open source models.
I'm currently shopping for offline hardware and it is very hard to estimate the performance I will get before dropping $12K. I would love a baseline guarantee, e.g. that I can always get at least 40 tok/s running GPT-OSS-120B using Ollama on Ubuntu out of the box.
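Until vendors publish those, you can at least measure any box you can get your hands on: Ollama's generate endpoint reports token counts and nanosecond timings per request. A small sketch (the model tag is whatever you have pulled locally):

  # Measure decode tok/s from Ollama's /api/generate statistics.
  # Assumes a local Ollama server on the default port 11434.
  import requests

  r = requests.post("http://localhost:11434/api/generate", json={
      "model": "gpt-oss:120b",  # placeholder model tag
      "prompt": "Write a haiku about GPUs.",
      "stream": False,
  }).json()

  # eval_count: generated tokens; eval_duration: decode time in nanoseconds.
  print(f'{r["eval_count"] / (r["eval_duration"] / 1e9):.1f} tok/s')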
hpcjoe 5 hours ago [-]
Look for llmfit on github. This will help with that analysis. I've found it reasonably accurate. If you have Ollama already installed, it can download the relevant models directly.
adrianwaj 5 hours ago [-]
Perhaps this company should think about acting as a landlord for their hardware. You buy (or lease) but they also offer colocation hosting. They could partner with crypto miners who are transitioning to AI factories to find the space and power to do this. I wonder if the machines require added cooling, though, in what would otherwise be a crypto mining center. CoreWeave made the transition and also do colocation. The switchover is real.
I think Tinygrad should think about recycling. Are they planning ahead in this regard? Is anyone?
My thought is that if there was a central database of who owns what and where, then at least when the recycling tech becomes available, people will know where to source their specific trash (and even pay for it). Having a database like that in the first place could even fuel the industry.
operatingthetan 7 hours ago [-]
The incremental price increases between products is funny.
$12,000, $65,000, $10,000,000.
znpy 7 hours ago [-]
I was more worried by the 600kW power requirement... that's 200 houses at full load (3kw) in southern europe... which likely means 400 houses at half load.
the town near my hometown has 650 – 800 houses (according to chatgpt).
crazy.
nine_k 3 hours ago [-]
Or it's two 300kW fast EV chargers working together.
A typical home just consumes rather little energy, now that LED lighting and heat pump cooling / heating became the norm.
ericd 3 hours ago [-]
That’s surprising, 200 amp 240v service is pretty common in the US.
dist-epoch 6 hours ago [-]
Your hometown also has public lighting, water pumps, and probably some other stuff.
sudo_cowsay 7 hours ago [-]
I mean the difference in performance is quite big too. However, the 10,000,000 is a little bit too much (imo).
SmartestUnknown 4 hours ago [-]
Regarding 2x faster than pytorch being a condition for tinygrad to come out of alpha:
Can they/someone else give more details as to what workloads pytorch is more than 2x slower than the hardware provides? Most of the papers use standard components and I assume pytorch is already pretty performant at implementing them at 50+% of extractable performance from typical GPUs.
If they mean more esoteric stuff that requires writing custom kernels to get good performance out of the chips, then that's a different issue.
mciancia 4 hours ago [-]
Not sure why they stopped using 6 GPUs in their builds - with 4 GPUs, both the 9070 and rtx6000 come in 2-slot designs, so it's easy to build it yourself using a somewhat more expensive, but still fairly regular, motherboard.
With 6 GPUs you have to deal with risers, pcie retimers, dual PSUs, and a custom case, so the value proposition there was much better IMO
comrade1234 7 hours ago [-]
Cool that you have a dual power supply model. It says rack mountable or free standing. Does that mean two form factors? $65K is more than we can afford right now but we are definitely eventually in the market for something we can run in our own colo.
It's funny though... we're using deepseek now for features in our service and based on our customer-type we thought that they would be completely against sending their data to a third-party. We thought we'd have to do everything locally. But they seem ok with deepseek which is practically free. And the few customers that still worry about privacy may not justify such a high price point.
hrmtst93837 7 hours ago [-]
Most privacy talk folds on contact with a quote. Latency and convenience beat philosophy fast once someone wants a dashboard next week, and a lot of "data sensitivity" talk is just the corporate version of buying "organic" food until the price tag shows up.
If private inference is actually non-negotiable, then sure, put GPUs in your colo and enjoy the infra pain, vendor weirdness, and the meeting where finance learns what those power numbers meant.
zozbot234 6 hours ago [-]
The real case for private inference is not "organic", it's "slow food". Offering slow-but-cheap inference is an afterthought for the big model providers, e.g. OpenRouter doesn't support it, not even as a way of redirecting to existing "batched inference" offerings. This is a natural opening for local AI.
selectodude 6 hours ago [-]
But how slow is too slow (faster than you’d think) and even then, you’re in for $25,000 for even the most basic on-premise slow LLM.
wongarsu 7 hours ago [-]
Sounds like a solid prebuilt with well-balanced components and a pretty case.
Not revolutionary in any way, but nice. Unless I'm missing something here?
eurekin 7 hours ago [-]
It's pretty close to what people have been frankenbuilding on r/LocalLLaMA... It's nice to have a prebuilt option.
speedgoose 7 hours ago [-]
You could also order such configurations from a classic server reseller as far as I know. The case is a bit original there.
nextlevelwizard 7 hours ago [-]
Tiny boxes are already several years old IIRC
llbbdd 3 hours ago [-]
If you wanted a box built by geohot, most recently known for signing on to Elon's Twitter and then bailing, it's for you.
He's an interesting guy. Seems to be one who does things the way he thinks is right, regardless of corporate profits.
ilaksh 6 hours ago [-]
I thought the most interesting thing about tinygrad was that theoretically you could render a model all the way into hardware similar to Taalas (tinygrad might be where Taalas got the idea for all I know).
I could swear I filed a GitHub issue asking about the plans for that but I don't see it. Anyway I think he mentioned it when explaining tinygrad at one point and I have wondered why that hasn't got more attention.
As far as boxes, I wish that there were more MI355X available for normal hourly rental. Or any.
mayukh 7 hours ago [-]
What’s the most effective ~$5k setup today? Interested in what people are actually running.
emidoots 5 hours ago [-]
At $7.2k + tax:
* RAM - $1500 - Crucial Pro 128GB Kit (2x64GB) DDR5 RAM, 5600MHz CP2K64G56C46U5, up to 4 sticks for 128GB or 256GB, Amazon
* Fans - $100 - 6x 120mm fans, 1x 140mm fan, of your choice
Look into models like Qwen 3.5
cmxch 2 hours ago [-]
Surprised to see X3D given the reports of failures. I’ve opted for a regular 9900x and X670E-E just to have a bit more assurance.
EliasWatson 7 hours ago [-]
The DGX Spark is probably the best bang for your buck at $4k. It's slower than my 4090 but 128gb of GPU-usable memory is hard to find anywhere else at that price. It being an ARM processor does make it harder to install random AI projects off of GitHub because many niche Python packages don't provide ARM builds (Claude Code usually can figure out how to get things running). But all the popular local AI tools work fine out of the box and PyTorch works great.
BobbyJo 7 hours ago [-]
Depends. If token speed isn't a big deal, then I think strix halo boxes are the meta right now, or Mac studios.
If you need speed, I think most people wind up with something like a gaming PC with a couple 3090 or 4090s in it.
Depending on the kinds of models you run (sparse moe or other), one or the other may work better.
bensyverson 7 hours ago [-]
Sadly $5k is sort of a no-man's land between "can run decent small models" and "can run SOTA local models" ($10k and above). It's basically the difference between the 128GB and 512GB Mac Studio (at least, back when it was still available).
cco 5 hours ago [-]
Biggest Mac Studio you can get. The DGX Spark may be better for some workflows, but since you're interested in price, the Mac will maintain its value far longer than the Spark, so you'll get more of your money out of it.
zozbot234 6 hours ago [-]
> What’s the most effective ~$5k setup today?
Mac Studio or Mac Mini, depending on which gives you the highest amount of unified memory for ~$5k.
Machines with the 4xx chips are coming next month so maybe wait a week or two.
It's soldered LPDDR5X with amd strix halo ... sglang and llama.cpp can do that pretty well these days. And it's, you know, half the price and you're not locked into the Nvidia ecosystem
ejpir 6 hours ago [-]
unfortunately the bigger models are pretty slow in token speed. The memory is just not that fast.
You can check what each model does on AMD Strix halo here:
With $5k you have to make compromises. Which compromises you are willing to make depends on what you want to do - and so there will be different optimal setups.
oofbey 7 hours ago [-]
DGX Spark is a fantastic option at this price point. You get 128GB VRAM which is extremely difficult to get at this price point. Also it’s a fairly fast GPU. And stupidly fast networking - 200gbps or 400gbps mellanox if you find coin for another one.
ekropotin 6 hours ago [-]
I’m not very well versed in this domain, but I think it’s not going to be “VRAM” (GDDR) memory, but rather “unified memory”, which is essentially RAM (some flavour of DDR5 I assume). These two types of memory have vastly different bandwidth.
I’m pretty curious to see any benchmarks on inference on VRAM vs UM.
banana_giraffe 2 hours ago [-]
A quick benchmark of float32 torch cuda->cuda copies, comparing some random machines:
Raptor Lake + 5080: 380.63 GB/s
Raptor Lake (CPU for reference): 20.41 GB/s
GB10 (DGX Spark): 116.14 GB/s
GH200: 1697.39 GB/s
This is a "eh, it works" benchmarks, but should give you a feel for the relative performance of the different systems.
In practice, this means I can get something like 55 tokens a sec running a larger model like gpt-oss-120b-Q8_0 on the DGX Spark.
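For anyone who wants to run the same thing, here's a reconstruction in that spirit (not the exact script used above) with torch CUDA events:

  # Rough cuda->cuda float32 copy bandwidth, timed with CUDA events.
  import torch

  def copy_bandwidth_gb_s(size_mb: int = 1024, iters: int = 50) -> float:
      n = size_mb * 1024 * 1024 // 4  # float32 elements
      src = torch.randn(n, device="cuda")
      dst = torch.empty_like(src)
      start = torch.cuda.Event(enable_timing=True)
      end = torch.cuda.Event(enable_timing=True)
      torch.cuda.synchronize()
      start.record()
      for _ in range(iters):
          dst.copy_(src)
      end.record()
      torch.cuda.synchronize()
      secs = start.elapsed_time(end) / 1000.0  # elapsed_time returns ms
      return (size_mb / 1024) * iters / secs   # GB copied per second, one-way

  print(f"{copy_bandwidth_gb_s():.2f} GB/s")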
ekropotin 1 hour ago [-]
Nice! Thanks for that.
55 t/s is much better than I could expect.
oofbey 6 hours ago [-]
I’m using VRAM as shorthand for “memory which the AI chip can use” which I think is fairly common shorthand these days. For the spark is it unified, and has lower bandwidth than most any modern GPU. (About 300 GB/s which is comparable to an RTX 3060.)
So for an LLM inference is relatively slow because of that bandwidth, but you can load much bigger smarter models than you could on any consumer GPU.
BobbyJo 7 hours ago [-]
Internet seems to think the SW support for those is bad, and that strix halo boxes are better ROI.
oofbey 6 hours ago [-]
Meh. DGX is Arm and CUDA. Strix is x86 and ROCm. CUDA has better support than ROCm. And x86 has better support than Arm.
Nowadays I find most things work fine on Arm. Sometimes something needs to be built from source which is genuinely annoying. But moving from CUDA to ROCm is often more like a rewrite than a recompile.
overfeed 4 hours ago [-]
> But moving from CUDA to ROCm is often more like a rewrite than a recompile.
Isn't everyone* in this segment just using PyTorch for training, or wrappers like Ollama/vllm/llama.cpp for inference? None have a strict dependency on Cuda. PyTorch's AMD backend is solid (for supported platforms, and Strix Halo is supported).
* enthusiasts whose budget is in the $5k range. If you're vendor-locked to CUDA, Mac Mini and Strix Halo are immediately ruled out.
BobbyJo 6 hours ago [-]
CUDA != driver support. Driver support seems to be what's spotty with DGX, and iirc Nvidia has only committed to updates for 2 years or something.
borissk 7 hours ago [-]
Can even network 4 of these together, using a pretty cheap InfiniBand switch. There is a YouTube video of a guy building and benchmarking such setup.
For 5K one can get a desktop PC with RTX 5090, that has 3x more compute, but 4x less VRAM - so depending on the workload may be a better option.
ekropotin 6 hours ago [-]
VRAM vs UM is not exactly apples to apples comparison.
jeremie_strand 4 hours ago [-]
The AMD angle is interesting given the history — tinygrad has had to work around a lot of driver quirks to get ROCm into a usable state. At that price point, you're essentially betting on a software stack that NVIDIA has had years to stabilize. Would be curious to see real-world utilization numbers vs. a comparable NVIDIA setup.
latchkey 4 hours ago [-]
Old news. ROCm works a lot better now than it did a year ago.
Gigachad 4 hours ago [-]
You are still really limited in what you can run. So much stuff is cuda only.
latchkey 4 hours ago [-]
Like what? Most of the good stuff is ported over already and anything else, tag Anush on X and see what you get. Also happy to help.
The point is that they care now.
Gigachad 4 hours ago [-]
Tbh my experience is in the non-AI uses; recently I was looking at Gaussian splatting tools and it seemed the majority of them were CUDA only. I'm also still bothered that AMD for ages claimed my card (5700xt) would be getting ROCm but just abandoned it.
latchkey 3 hours ago [-]
>I was looking at Gaussian splatting tools and it seemed the majority of it was CUDA only.
Not surprising. True, the ecosystem is like early OSX vs. Windows. Eventually it'll get ported over if there is demand.
djsjajah 3 hours ago [-]
trl.
give me a uv command to get that working.
But even in the AMD stack (things like CK and AITER), consumer cards are not even second-class citizens. They are a distant third at best.
If you just want to run vllm with the latest model, if you can get it running at all there are going to be paper cuts all along the way and even then the performance won't be close to what you could be getting out of the hardware.
latchkey 3 hours ago [-]
It is not perfect, but it isn't that bad anymore. Tons of improvements over the last year.
himata4113 6 hours ago [-]
exabox reads as if it was making a joke of something or someone. if it's real then it's really interesting!
vlovich123 7 hours ago [-]
Surprising to see this with AMD GPUs considering how George famously threw up his hands at AMD not being worth working with.
embedding-shape 7 hours ago [-]
Yeah, and labeling AMD "Driver Quality" as "Good" (for comparison, they label nvidia's driver quality as "Great").
Quite an expensive little bastard. I wonder how much sense it makes to invest in such a device, if you can get $0.40/mtok from Hyperbolic for example.
sowbug 1 hours ago [-]
If you're OK letting them train on, and maybe keep, your data, then it's hard to beat cloud prices vs. local.
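The break-even arithmetic at that quoted rate (ignoring power, depreciation, and your time): $12,000 / $0.40 per million tokens = 30 billion tokens through the box before owning beats renting. Privacy, not cost, is the real argument for local.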
zahirbmirza 5 hours ago [-]
10 mil today... 1k in 10 years. Are OpenAI and Anthropic overvalued?
Gigachad 4 hours ago [-]
Looking at these prices I’m just thinking that as a user it makes no sense to buy this when you can just use the subsidised stuff from AI companies and then buy it a few years later at a tiny % of the cost.
kylehotchkiss 44 minutes ago [-]
Meanwhile M-series processors and Qwen are racing to do the same thing for a much more approachable price.
andai 6 hours ago [-]
Can someone explain the exabox? They say it "functions as a single GPU". Is there anything like that currently existing?
wmf 6 hours ago [-]
An NVL72 rack or Helios rack also "functions as a single GPU".
progbits 6 hours ago [-]
TPU pods
arunakt 1 hour ago [-]
Great idea. Can you publish the power consumption numbers for this device?
I can't find sources but I think they are building it for Comma.ai (geohot's other company) so that Comma can scale up their training datacenter.
orochimaaru 7 hours ago [-]
And... what about 20k lbs and 1360 cubic feet screams "tiny" :)
smoyer 7 hours ago [-]
That is very close to a half-length shipping container.
mayukh 7 hours ago [-]
A non-trivial share of this market won’t show up in public data.
That makes most estimates unreliable by default
spiderfarmer 7 hours ago [-]
VC funded startups
dist-epoch 6 hours ago [-]
A company which doesn't want the big LLM providers to see its prompts or data - military, health, finance, research
sudo_cowsay 7 hours ago [-]
I always wonder about these expensive products: does the company make them once they're ordered, or do they just make them beforehand?
wmf 2 hours ago [-]
He builds a batch every few months.
cyanydeez 5 hours ago [-]
In this case, they're taking wire transfers, so they're definitely building them once they get the cash.
operatingthetan 7 hours ago [-]
Are we at the point where 2x 9070XT's are a viable LLM platform? (I know this has 4, just wondering for myself).
oceanplexian 7 hours ago [-]
These things either don’t have Flash Attention or have a really hacked-together version of it. Is it viable for a hobby? Sure. Is it viable for a serious workload with all the optimizations, CUDA, etc.? Not really.
cyanydeez 5 hours ago [-]
I'd go with strix halo if you're looking at that old of tech.
the latest AMD GPUs are RX 9070 XT w/32GB each
mememememememo 2 hours ago [-]
Give me token/s for favourite models.
orliesaurus 7 hours ago [-]
I wonder if this is frontpage right now because of the other tiiny (the names are similar) video that went viral ... which turns out wasn't an actual product by the tinygrad linked in this post[1]
Adding this to my list of ~beautifully~ designed things to buy when I win the lottery.
ppap3 6 hours ago [-]
I thought there was a typo in the price
rpastuszak 5 hours ago [-]
Who is this for?
throwatdem12311 7 hours ago [-]
Finally, a computer that should be able to run Monster Hunter Wilds with decent performance.
But let’s be real, 12k is kinda pushing it - what kind of people are gonna spend $65k or even $10M (lmao WTAF) on a boutique thing like this. I don't think these kinds of things go in datacenters (happy to be corrected) and they are way too expensive (and probably way too HOT) to just go in a home or even an office “closet”.
oofbey 7 hours ago [-]
It’s not for people to buy. It’s for companies to buy. Compare to salary, and it’s cheap.
aziaziazi 6 hours ago [-]
> What's the goal of the tiny corp?
> To accelerate. We will commoditize the petaflop and enable AI for everyone.
I had the same feeling as throwatdem when reading this. Your comment clarifies what they meant by "everyone".
throwatdem12311 7 hours ago [-]
What companies are buying this instead of like a Dell server or whatever?
flumpcakes 6 hours ago [-]
These specs look enormously cheaper than doing it with dell servers. The last quote I had for a bog standard dell server was $50k and only if bought in the next few days or so. The prices are going up weekly.
throwatdem12311 6 hours ago [-]
So what’s the catch? If it seems too good to be true it probably is.
wmf 3 hours ago [-]
These are "unsupported" configurations. Nvidia/AMD discourage running multiple gaming/workstation cards and encourage customers to buy $500K SXM/OAM servers.
lostmsu 7 hours ago [-]
Hm, I compared my salary with $10M and it doesn't feel cheap. I guess skill issue.
throwatdem12311 6 hours ago [-]
But how will I make ad-supported youtube videos about how I automated my life with OpenClaw using a $10M boutique AI server to make a few thousand in ad revenue while burning tens of thousands per month on API cost.
renewiltord 4 hours ago [-]
I have 8x RTX 6000 Pro. Better to run the 300 W version of the cards. And it costs close to their 4x version. I get why they make it so big. So you can cool it at home. I prefer to just put in datacenter. Much cheaper power.
aabaker99 6 hours ago [-]
> Can I pay with something besides wire transfer?
> In order to keep prices low and quality high, we don't offer any customization to the box or ordering process. Wire transfer is the only accepted form of payment.
Sorry, what? Is this just a scam?
101008 6 hours ago [-]
Wire transfer has no commission or extra costs associated with it, so I find it very honest.
ejpir 6 hours ago [-]
man, cmon. a little more effort.
aabaker99 6 hours ago [-]
Sure thing. For those who don’t know, wiring money like this is a good way to lose your money.
Wire transfer is a bank transfer, not a money wire to Western Union and the like.
aabaker99 5 hours ago [-]
Yeah I agree the FTC article could be more clear here. I think they call out Western Union because those are tools that are commonly used by scammers.
But let’s be clear: the risks are the same if you are wiring money through Western Union or wiring through any other bank. Once you wire the money you do not have the same protections as other payment mechanisms. And if you don’t get the product as described, you are likely out your money. This is compared to other forms of payment like credit cards where you are protected. With a credit card you can issue a charge back to the seller and get your money back in the case of fraud. With a wire transfer you cannot.
There's a lot there that makes sense and I think needs to be considered. But a lot just seems to be out of the blue, included without connection, in my view. Feels like maybe they're in-group messages that I don't understand. How this is headlined as against democracy is unclear to me, and revolting. I both think we must grapple with the world as it is, and this post is strongly in that area, but to let fear be the dominant ruling emotion is one of the main definitions of conservatism, and its use here to scare us sounds bad.
kelvinjps10 6 hours ago [-]
He was always defending democracy and freedom before, and that was his argument for the local AI thing? What changed?
pencilheads 7 hours ago [-]
Geohot has always been an arrogant cunt who thinks he's better than everyone else. That blog post is totally on brand.
tadfisher 7 hours ago [-]
For those unaware, Mencius Moldbug is the pen name of Curtis Yarvin, thought leader for the Silicon Valley branch of right-wing technofascist weirdos which includes Peter Thiel and apparently half of a16z.
fragmede 7 hours ago [-]
Damn, that's a take.
stale2002 6 hours ago [-]
Geohot's politics are fairly straightforward once you understand his background. Geohot is the prodigy child who, at the age of ~16, accomplished amazing technical feats on his own.
And his politics are a derivative of Great Man Theory, and his positions on things like democracy follow from that. This idea, espoused also by some of the VC/tech elite like Peter Thiel, is that singular hardworking genius individuals can change the world on their own, and everyone not in this top 0.1% is a borderline NPC.
They do this both because of their genius/hardwork, and also because they are willing to break the rules that are set forth by this bottom 99.9%.
I'm starting to call this ideology Authoritarian techno-Libertarianism. It's a deliberately oxymoronic name that I use, because these "Great Men" are definitely trying to change the world. IE, they are trying to impose their goals and values on the world without getting the buy-in of other people.
Thats the "authoritarian" part. And then the "libertarian" part is that they are going about this imposition of their will on the world by doing it all themselves, through their own hard work.
Think "Person invents a world changing technology, that some people thing is bad, and just releases it open source for anyone to use". AI models are a great example, in fact. Once that technology is out there the genie cannot be put back into the bottle and a ton of people are going to lose their jobs, ect.
A disdain for democracy follows directly from things like this. You don't wait for people to vote to allow you to change the world by inventing something new. You just do it and watch the results.
overfeed 4 hours ago [-]
> also because they are willing to break the rules that are set forth by this bottom 99.9%[...] they are going about this imposition of their will on the world by doing it all themselves, through their own hard work.
I think all these wildly successful neo-feudalists get increasingly emboldened the more they get away with bigger and bigger social infractions.
It's also clear that they haven't experienced an environment with extreme inequality - it's not safe for anyone there! They think the NPC plebs will continue to follow "the rules" in perpetuity without considering that this is a direct result of the stability they are actively undermining. They clearly don't read enough history.
SilverElfin 5 hours ago [-]
What makes it “Libertarianism” still? To me it feels like they’re taking away freedom, control, and influence from everyone who is not them. Even the concentration of wealth is itself taking away everyone else’s places in the world.
LogicFailsMe 4 hours ago [-]
Scratch a libertarian and a fascist bleeds. That's libertarianism here, no?
flykespice 6 hours ago [-]
"tiny" and it's 20k lbs and cost about 10k...
Since when did our perception of tiny blow out of size in tech? Is it the influence of "hello world" Electron apps consuming 100mb of mem while idle setting the new standard? Anyway, being an AI bro seems like an expensive hobby...
fhn 5 hours ago [-]
"but if you haven't contributed to tinygrad your application won't be considered" this company expects people to work for free?
paxys 5 hours ago [-]
> See our bounty page to judge if you might be a good fit. Bounties pay you while judging that fit.
Literally the line above that
roarcher 2 hours ago [-]
They MIGHT pay you IF you're a fit. They're bounties, i.e. spec work. They also pay a max of $1000, most of them significantly less. You can see more info at the link in that line:
> All bounties paid out at my (geohot) discretion. Code must be clean and maintainable without serious hacks.
No thanks. If you want to try before you buy, have your candidates do a paid test project. Founders need to stop acting like it's a privilege to work for them. Any talent worth hiring has plenty of other options that will treat them with respect.
Still, this is a great idea, and one I hope takes off. I think there's a good argument that the future of AI is in locally-trained models for everyone, rather than relying on a big company's own model.
One thought: The ability to conveniently get this onto a 240v circuit would be nice. Having to find two different 120v circuits to plug this into will be a pain for many folks.
* this section written by me typing on keyboard *
* this section produced by AI *
And usually both exist in document and lengthy communications. This gets what I wanted across with exactly my intention and then I can attach 10x length worth of AI appendix that would be helpful indexing and references.
Are references helpful when they're generated? The reader could've generated them themselves. References would be helpful if they were personal references of stuff you actually read and curated. The value then would be getting your taste. References from an AI may well be good-looking nonsense.
Wouldn't there be a massive duplication of effort in that case? It'll be interesting to see how the costs play out. There are security benefits to think about as well in keeping things local-first.
Maybe the volume for them is ok that well-intentioned but poor quality PRs can be politely(or otherwise, culture depending) disregarded and the method of generation is not important.
Then you could focus fire, like the script kiddies did with DDoS in the old days on fixing whatever preferred issues you have.
fundamentally, looks like they are shipping consumer off-the-shelf hardwares in a custom box.
Or could be the server edition 6000s that just have a heatsink and rely on the case to drive air through them, those are 600W cards.
(I work for an electrical contractor so my sense of ease might be overcorrecting)
If it shipped with like 4090+ (for a higher price) it’d be more tempting.
https://x.com/__tinygrad__/status/1983917797781426511
Stopped due to raising GPU prices:
https://x.com/__tinygrad__/status/2011263292753526978
So basically you need a brand new circuit run if you don't have two 120V circuits next to each other. But if you're spending $65k on a single machine, an extra grand for an electrician to run conduit should be peanuts. While you're at it I would def add a whole-home GFCI, lightning/EMI arrestor, and a UPS at the outlet, so one big shock doesn't send $65k down the toilet.
He's not saying you would use it as two separate 120v circuits sharing a ground but rather as a single 240v circuit. His point is that it's easy to rewire for 240v since it's the same as all the other wiring in your house just with both poles exposed.
Of course you do have to run a new wire rather than repurpose what's already in the wall since you need the entire circuit to yourself. So I think it's not as trivial as he's making out.
But then at that wattage you'll also want to punch an exhaust fan in for waste heat so it's not like you won't already be making some modifications.
The only place where there's isolation is stuff like USB ports to avoid dangerous ground loop currents.
That said I believe the PSU itself provides full isolation and won't backfeed so using two on separate circuits should (maybe?) be safe. Although if one circuit tripped the other PSU would immediately be way over capacity. Hopefully that doesn't cause an extended brownout before the second one disables itself.
Can confirm.
Has this guy never worked on a B2B product before? Nobody is going to order a $10 million piece of infrastructure through your website's order form. And they are definitely going to want to negotiate something, even if it's just a warranty. And you'll do it because they're waving a $10 million check in your face.
The tone of this website is arrogant to the point of being almost hostile. The guy behind this seems to think that his name carries enough weight to dictate terms like this, among other things like requiring candidates to have already contributed to the project to even be considered for a job. I would be extremely surprised if anyone except him thinks he's that important.
> Can you fill out this supplier onboarding form?
That's very important context, as anyone who has been asked to fill out a supplier onboarding form (hi) will attest.
Edit: found a third party referencing the claim but it doesn't belong in the title here I think:
Meet the World’s Smallest ‘Supercomputer’ from Tiiny AI; A Machine Bold Enough to Run 120B AI Models Right in the Palm of Your Hand
https://wccftech.com/meet-the-worlds-smallest-supercomputer-...
Now I'm wondering if the HN title was submitted by some AI bot that couldn't tell the difference.
I'm running a 70B model now that's okay, but it's still fairly tight. And I've got 16GB more VRAM than the red v2.
I'm also confused why this is 12U. My whole rig is 4U.
The green v2 has better GPUs. But for $65k, I'd expect a much better CPU and 256GB of RAM. It's not like a Threadripper 7000 is going to break the bank.
I'm glad this exists but it's... honestly pretty perplexing
I don't see the 120B claim on the page itself. Unless the page has been edited, I think it's something the submitter added.
I agree, though. The only way you're running 120B models on that device is either extreme quantization or by offloading layers to the CPU. Neither will be a good experience.
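As a sketch of what that offloading looks like in practice, using llama-cpp-python (the model filename and layer count below are hypothetical):

    # Split a quantized model between GPU and CPU with llama-cpp-python.
    # Layers beyond n_gpu_layers run on the CPU -- the slow path described above.
    from llama_cpp import Llama

    llm = Llama(
        model_path="some-120b-model-Q4_K_M.gguf",  # hypothetical 4-bit GGUF
        n_gpu_layers=20,  # only the layers that fit in VRAM go to the GPU
        n_ctx=8192,
    )
    out = llm("Summarize KV caching in one paragraph.", max_tokens=200)
    print(out["choices"][0]["text"])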
These aren't a good value buy unless you compare them to fully supported offerings from the big players.
It's going to be hard to target a market where most people know they can put together the exact same system for thousands of dollars less and have it assembled in an afternoon. RTX 6000 96GB cards are in stock at Newegg for $9000 right now which leaves almost $30,000 for the rest of the system. Even with today's RAM prices it's not hard to do better than that CPU and 256GB of RAM when you have a $30,000 budget.
The config that's less useful is the 64GB VRAM / 128GB system RAM one: even the large MoE models only need ~20B for the router, so the rest of the VRAM is essentially wasted (mixing experts between VRAM and system RAM has basically no performance benefit).
I imagine that's because they are buying a single SKU for the shell/case. Their answer to your question would presumably be: in order to keep prices low and quality high, we don't offer any customization of the server dimensions.
But yeah, 4x Blackwell 6000s are ~$32-36k; not sure where the other $30k is going.
A 120B model cannot fit on 4 x 24GB GPUs at full precision.
Either you're confusing this with the 20B model, or you have 48GB modded 3090s.
EDIT: Either they edited that to say "quad 3090s", or I just missed it the first time.
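The arithmetic is a quick back-of-envelope check (weights only; KV cache and activations come on top):

    # Weight footprint of a 120B-parameter model at common precisions.
    params = 120e9
    for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
        print(f"{name:>5}: {params * bytes_per_param / 1e9:,.0f} GB")
    # fp16: 240 GB, int8: 120 GB, 4-bit: 60 GB
    # 4 x 24 GB = 96 GB of VRAM, so only an aggressive quant fits at all.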
Can't you offload KV to system RAM, or even storage? It would make it possible to run with longer contexts, even with some overhead. AIUI, local AI frameworks include support for caching some of the KV in VRAM, using an LRU policy, so the overhead would be tolerable.
With that said, people are trying to extend VRAM into system RAM or even NVMe storage, but as soon as you hit the PCIe bus with the high-bandwidth layers like KV cache, you eliminate a lot of the performance benefit that you get from having fast memory near the GPU die.
Only useful for prefill (given the usual discrete-GPU setup; iGPU/APU/unified memory is different and can basically be treated as VRAM-only, though a bit slower), since the PCIe bus becomes a severe bottleneck as soon as you offload more than a tiny fraction of the memory workload to system memory/NVMe. For decode, you're better off running entire layers (including expert layers) on the CPU, which local AI frameworks support out of the box. CPU-run layers can in turn offload model parameters/KV cache to storage as a last resort, but if you offload too much to storage (insufficient RAM cache), that overhead dominates and basically everything else becomes irrelevant.
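To put rough numbers on why the PCIe hop hurts (illustrative spec-sheet figures; sustained bandwidth is lower across the board):

    # Approximate peak bandwidth of each tier a decode step might touch.
    tiers = [("VRAM (RTX 4090)", 1008), ("dual-channel DDR5-5600", 90),
             ("PCIe 4.0 x16, one way", 32), ("Gen4 NVMe SSD", 7)]
    for name, gbps in tiers:
        print(f"{name:>24}: {gbps:5d} GB/s")
    # Decode is bandwidth-bound, so every byte pulled across PCIe (~32 GB/s)
    # instead of VRAM (~1000 GB/s) costs roughly 30x more time.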
I have no idea who would buy this. Maybe if you think Vera Rubin is three years out? But NV ships, man, they are shipping.
Can it run Crysis?
-- Jensen Huang
A single box with those specs (the red and green) that you don't have to build or configure - I could see that being useful if you have the money but not the time to build and configure one yourself.
Obviously any Turing machine can run any size of model, so the "120B" claim doesn't mean much - what actually matters is speed, and I just don't believe this can be fast enough on the models that my $5000 5090-based PC is too slow for and lacks enough VRAM for.
120B could run, but I wouldn't want to be the person who had to use it for anything.
To be fair, the 120B claim doesn't appear on the webpage. I don't know where it came from, other than the person who submitted this to HN
Also, nobody is comparing this box to a $10M Nvidia rack-scale deployment. They're comparing it to putting all of the same parts into their Newegg basket and assembling it themselves.
I'm almost sure it's possible to custom-build a machine as powerful as their red v2 within a $9k budget. And have a lot of fun along the way.
So for programming, context is probably worth more per dollar than inference speed.
This is already solved by running LM Studio on a normal computer.
"likely" doesn't inspire much confidence. Surely, they have those numbers, and if it was, they'd publicize the comparisons.
I'm currently shopping for offline hardware, and it is very hard to estimate the performance I will get before dropping $12K. I would love a guaranteed baseline, e.g. that I can always get at least 40 tok/s running GPT-OSS-120B using Ollama on Ubuntu out of the box.
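For what it's worth, Ollama's API reports the fields needed to measure that baseline yourself; a minimal check (assuming a local Ollama server and whatever model tag you've pulled):

    # Measure decode speed from Ollama's /api/generate response fields.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "gpt-oss:120b",  # substitute your pulled model tag
              "prompt": "Write a haiku about GPUs.",
              "stream": False},
    ).json()
    # eval_count = generated tokens; eval_duration = nanoseconds spent decoding
    print(f"{resp['eval_count'] / resp['eval_duration'] * 1e9:.1f} tok/s")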
I think Tinygrad should think about recycling. Are they planning ahead in this regard? Is anyone? My thought is that if there were a central database of who owns what and where, then at least when the recycling tech becomes available, people will know where to source their specific trash (and even pay for it). Having a database like that in the first place could even fuel the industry.
$12,000, $65,000, $10,000,000.
The town near my hometown has 650-800 houses (according to ChatGPT).
Crazy.
A typical home just consumes rather little energy, now that LED lighting and heat-pump cooling/heating have become the norm.
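Rough numbers to make the comparison concrete (assuming a ~3 kW box running 24/7 and the approximate ~10,700 kWh/year average US household; both figures are ballpark):

    # How many average homes' worth of energy a 3 kW box burns year-round.
    box_kwh_per_year = 3.0 * 24 * 365       # ~26,280 kWh
    avg_home_kwh_per_year = 10_700          # approximate US average
    print(box_kwh_per_year / avg_home_kwh_per_year)  # ~2.5 homes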
Can they (or someone else) give more details on which workloads PyTorch runs at less than half the speed the hardware provides? Most of the papers use standard components, and I assume PyTorch already implements them at 50+% of the extractable performance on typical GPUs.
If they mean more esoteric stuff that requires writing custom kernels to get good performance out of the chips, then that's a different issue.
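One way to sanity-check such claims is to measure achieved matmul throughput against the spec-sheet peak; a minimal PyTorch sketch (fp16 on a CUDA device assumed):

    import time
    import torch

    # Time a large fp16 matmul and convert to achieved TFLOP/s.
    n, iters = 8192, 50
    a = torch.randn(n, n, device="cuda", dtype=torch.half)
    b = torch.randn(n, n, device="cuda", dtype=torch.half)
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    dt = (time.time() - t0) / iters
    print(f"{2 * n**3 / dt / 1e12:.1f} TFLOP/s")  # compare to the GPU's rated peak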
With 6 GPUs you have to deal with risers, PCIe retimers, dual PSUs, and a custom case, so the value proposition there was much better IMO.
It's funny though... we're using DeepSeek now for features in our service, and based on our customer type we thought they would be completely against sending their data to a third party. We thought we'd have to do everything locally. But they seem OK with DeepSeek, which is practically free. And the few customers that still worry about privacy may not justify such a high price point.
If private inference is actually non-negotiable, then sure, put GPUs in your colo and enjoy the infra pain, vendor weirdness, and the meeting where finance learns what those power numbers meant.
Not revolutionary in any way, but nice. Unless I'm missing something here?
He's an interesting guy. Seems to be one who does things the way he thinks is right, regardless of corporate profits.
I could swear I filed a GitHub issue asking about the plans for that but I don't see it. Anyway I think he mentioned it when explaining tinygrad at one point and I have wondered why that hasn't got more attention.
As far as boxes, I wish that there were more MI355X available for normal hourly rental. Or any.
* RAM - $1500 - Crucial Pro 128GB kit (2x64GB) DDR5-5600 CP2K64G56C46U5; one kit for 128GB or two kits (4 sticks) for 256GB, Amazon
* GPU - $4700 - RTX Pro 5000 48GB, Microcenter
* CPU/Mobo bundle - $1100 - AMD Ryzen 7 9800X3D, MSI X870E-P Pro, ditch the 32GB RAM, Microcenter
* Case - $220, Hyte Y70, Microcenter
* Cooler - $155, Arctic Cooling Liquid Freezer III Pro, top-mount it, Microcenter
* PSU - $180, RM1000x, Microcenter
* SSD - $400 - Samsung 990 Pro 2TB Gen 4 NVMe M.2
* Fans - $100 - 6x 120mm fans, 1x 140mm fan, of your choice (rough total tallied below)
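Tallying the list as quoted (prices as listed; tax and shipping excluded):

    # Sum of the component prices above, in list order (RAM through fans).
    prices = [1500, 4700, 1100, 220, 155, 180, 400, 100]
    print(f"${sum(prices):,}")  # $8,355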
Look into models like Qwen 3.5
Mac Studio or Mac Mini, depending on which gives you the highest amount of unified memory for ~$5k.
Machines with the 4xx chips are coming next month so maybe wait a week or two.
It's soldered LPDDR5X with AMD Strix Halo... sglang and llama.cpp can handle that pretty well these days. And it's, you know, half the price, and you're not locked into the Nvidia ecosystem.
You can check what each model does on AMD Strix Halo here:
https://kyuz0.github.io/amd-strix-halo-toolboxes/
I’m pretty curious to see any benchmarks on inference on VRAM vs UM.
In practice, this means I can get something like 55 tokens a sec running a larger model like gpt-oss-120b-Q8_0 on the DGX Spark.
55 t/s is much better than I could expect.
So for an LLM, inference is relatively slow because of that bandwidth, but you can load much bigger, smarter models than you could on any consumer GPU.
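A rough bandwidth-ceiling estimate makes that plausible (assumed numbers: ~273 GB/s of unified-memory bandwidth on the Spark, ~5.1B active parameters per token for gpt-oss-120b, and ~4.25 bits/param for the MXFP4-style active weights; all approximate):

    # Bandwidth-bound decode ceiling for a MoE model on unified memory.
    bandwidth = 273e9                  # bytes/s, assumed
    active_bytes = 5.1e9 * 4.25 / 8    # ~2.7 GB read per token, assumed
    print(bandwidth / active_bytes)    # ~100 tok/s upper bound; 55 observed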
Nowadays I find most things work fine on Arm. Sometimes something needs to be built from source which is genuinely annoying. But moving from CUDA to ROCm is often more like a rewrite than a recompile.
Isn't everyone* in this segment just using PyTorch for training, or wrappers like Ollama/vllm/llama.cpp for inference? None have a strict dependency on CUDA. PyTorch's AMD backend is solid (for supported platforms, and Strix Halo is supported).
* enthusiasts whose budget is in the $5k range. If you're vendor-locked to CUDA, Mac Mini and Strix Halo are immediately ruled out.
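Worth noting that on ROCm builds of PyTorch the HIP backend shows up as "cuda", so ordinary device-agnostic code runs unchanged on supported AMD GPUs; a trivial sketch:

    import torch

    # On a ROCm build, torch.cuda.is_available() is True and "cuda" maps to HIP.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(1024, 1024, device=device)
    print(device, (x @ x).sum().item())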
For $5K one can get a desktop PC with an RTX 5090, which has 3x more compute but 4x less VRAM - so depending on the workload it may be a better option.
The point is that they care now.
Not surprising. True, the ecosystem is like early OSX vs. Windows. Eventually it'll get ported over if there is demand.
But even within the AMD stack (things like CK and AITER), consumer cards are not even second-class citizens; they are a distant third at best. If you just want to run vllm with the latest model, if you can get it running at all, there are going to be paper cuts all along the way, and even then the performance won't be close to what you could be getting out of the hardware.
720x RDNA5 AT0 XL 25,920 GB VRAM 23,040 GB System RAM
~ $10 Million
Who is the target market here?
the latest AMD GPUs are RX 9070 XT w/32GB each
[1]https://x.com/ShriKaranHanda/status/2035284883384553953
But let's be real, $12k is kinda pushing it - what kind of people are gonna spend $65k or even $10M (lmao WTAF) on a boutique thing like this? I don't think these kinds of things go in datacenters (happy to be corrected), and they are way too expensive (and probably way too HOT) to just go in a home or even an office "closet".
I had the same feeling as throwadem when reading this. Your comment clarifies what they meant by "everyone".
Sorry, what? Is this just a scam?
https://consumer.ftc.gov/articles/what-know-you-wire-money
But let's be clear: the risks are the same whether you are wiring money through Western Union or through any other bank. Once you wire the money, you do not have the protections of other payment mechanisms, and if you don't get the product as described, you are likely out your money. Compare that with a credit card, where you are protected: in the case of fraud you can issue a chargeback to the seller and get your money back. With a wire transfer you cannot.
There's a lot there that makes sense and I think needs to be considered. But a lot just seems to come out of the blue, included without connection, in my view. It feels like maybe these are in-group messages that I don't understand. How this is framed as being against democracy is unclear to me, and revolting. I do think we must grapple with the world as it is, and this post is strongly in that area, but letting fear be the dominant ruling emotion is one of the main definitions of conservatism, and its use here to scare us sounds bad.
And his politics are a derivative of Great Man Theory, and his positions on things like democracy follow from that. This idea, also espoused by some of the VC/tech elite like Peter Thiel, is that singular hardworking genius individuals can change the world on their own, and everyone who is not in this top 0.1% is a borderline NPC.
They do this both because of their genius/hard work, and also because they are willing to break the rules that are set forth by this bottom 99.9%.
I'm starting to call this ideology Authoritarian techno-Libertarianism. It's a deliberately oxymoronic name, because these "Great Men" are definitely trying to change the world. That is, they are trying to impose their goals and values on the world without getting the buy-in of other people.
That's the "authoritarian" part. And then the "libertarian" part is that they are going about this imposition of their will on the world by doing it all themselves, through their own hard work.
Think "person invents a world-changing technology that some people think is bad, and just releases it open source for anyone to use". AI models are a great example, in fact. Once that technology is out there, the genie cannot be put back into the bottle, and a ton of people are going to lose their jobs, etc.
A disdain for democracy follows directly from things like this. You don't wait for people to vote to allow you to change the world by inventing something new. You just do it and watch the results.
I think all these wildly successful neo-feudalists get increasingly emboldened the more they get away with bigger and bigger social infractions.
It's also clear that they haven't experienced an environment with extreme inequality - it's not safe for anyone there! They think the NPC plebs will continue to follow "the rules" in perpetuity, without considering that this is a direct result of the stability they are actively undermining. They clearly don't read enough history.
Since when did our perception of "tiny" get so inflated in tech? Is it the influence of "hello world" Electron apps consuming 100MB of memory while idle setting the new standard? Anyway, being an AI bro seems like an expensive hobby...
Literally the line above that
> All bounties paid out at my (geohot) discretion. Code must be clean and maintainable without serious hacks.
No thanks. If you want to try before you buy, have your candidates do a paid test project. Founders need to stop acting like it's a privilege to work for them. Any talent worth hiring has plenty of other options that will treat them with respect.