Oxide builds servers as they should be [audio] (changelog.com)
179 points by tosh on July 9, 2022 | 127 comments


If anyone from Oxide is here, could you guys restart the "On the Metal" podcast? It was the only CS-related podcast I ever liked.


"On the metal" was indeed lots of fun; I understand that it's been "succeeded" by Oxide's Twitter spaces, https://github.com/oxidecomputer/twitter-spaces


Twitter Spaces are awful, however: they aren't easily recorded and can't easily be listened to on any device. (You need to go to some kind of YouTube link from one of the participants to listen to a recording.) Their Twitter talks are also mostly unfocused rants by the participants, so they're hard to listen to. I'm sure it's more fun for the participants, but not so much for the listeners. You also lose the comfort of a consistent set of participants.

Some of them have even been delving into some of the participants' personal politics, which is just something I'm not interested in hearing.


We've turned it into a podcast as well, so you should be able to enjoy it wherever you consume podcasts.[0]

While there may be some unfocused rants in there (sorry, I guess?), there's also a lot of extraordinary technical content -- certainly, if anyone else has described their board bringup experiences in as explicit technical detail as we have in [1] and [2], I would love to be pointed to it!

[0] https://podbay.fm/p/oxide-and-friends

[1] Tales from the Bringup Lab: https://podbay.fm/p/oxide-and-friends/e/1638838800

[2] More Tales from the Bringup Lab: https://podbay.fm/p/oxide-and-friends/e/1650326400


Thanks for the link to the podcast version. That's helpful. I did listen to those two talks earlier, and they were probably the most interesting. I did dislike the constant interrupting of each other in the middle of someone telling a story; the story became quite fragmented, because it took some time to come back to the topic (or the topic was forgotten entirely and never elaborated on after the interruption).

Another thing that ended up being frustrating was the repeated references to some image; I had to pause and go digging through people's personal Twitter accounts to try to find the image being talked about. There seemed to be an assumption that the listener follows every employee's personal Twitter account.

There are also audio quality issues in general; the recording equipment isn't nearly as good as what was used for the original On the Metal podcast. (Maybe it's similarly cheap equipment, but the quality was better before regardless.)


I love those little interruptions. Like when a guest is speaking on a topic and they bring up a name or company of the past. And that segues into another bit of SV lore.


If you haven't already: Maybe an "episode" directing listeners to the new stuff?

[I wouldn't have known if I hadn't stumbled onto these replies. :( ]


The audio quality seemed a bit rough on the few I checked out, but I'll try again.


The Oxide Twitter spaces are recorded and distributed as a podcast [1]. If you're not a fan of the content it won't help but it's at least more convenient to consume than through Twitter.

[1] https://feeds.transistor.fm/oxide-and-friends


That's such a shame. I've tried listening to those recordings and they're hard to follow without the context of the "Twitter Space" (which I'm not even sure what that is, because I don't use Twitter). The audio quality is also quite grating to listen to.

Would love recommendations for in-depth, tech-related podcasts that stand on their own, like On the Metal.


IMO the recordings stand just fine on their own. Even when I'm in the Twitter space live, I'm certainly not looking at tweets or anything else that's happening on the screen, unless I want to talk.


Thanks. I'll check it out. On the last podcast they had Ken Shirriff and it was what got me hooked.


We're (obviously!) huge fans of On the Metal[0] too (and the episode you cite with Ken was indeed extraordinary[1]!) -- but we have come to like our Oxide and Friends[2] Twitter Space even more. The reasons why are manifold, and really merit their own long-form piece, but as a few examples of episodes that show why we find it so compelling, see "Tales from the Bringup Lab"[3], "Theranos, Silicon Valley, and the March Madness of Tech Fraud"[4], "Another LPC55 Vulnerability"[5], "The Sidecar Switch"[6], "The Pragmatism of Hubris"[7], "Debugging Methodologies"[8], or "The Books in the Box"[9].

There's tons more where that came from; if you are a fan of On the Metal, I don't think you'll be disappointed -- and it has the added advantage that you can join a future conversation!

[0] On The Metal: https://podbay.fm/p/on-the-metal

[1] Ken Shirriff: https://podbay.fm/p/on-the-metal/e/1611669600

[2] Oxide and Friends: https://podbay.fm/p/oxide-and-friends

[3] Tales from the Bringup Lab: https://podbay.fm/p/oxide-and-friends/e/1638838800

[4] Theranos, Silicon Valley, and the March Madness of Tech Fraud: https://podbay.fm/p/oxide-and-friends/e/1632182400

[5] Another LPC55 Vulnerability: https://podbay.fm/p/oxide-and-friends/e/1649116800

[6] The Sidecar Switch: https://podbay.fm/p/oxide-and-friends/e/1638234000

[7] The Pragmatism of Hubris: https://podbay.fm/p/oxide-and-friends/e/1639443600

[8] Debugging Methodologies: https://podbay.fm/p/oxide-and-friends/e/1652745600

[9] The Books in the Box: https://podbay.fm/p/oxide-and-friends/e/1632787200


Oxide and Friends is good despite Twitter Spaces, but On the Metal was great. I hope to see it return.


Could you include a link to the new podcastified Twitter space in the footer of https://oxide.computer/ ?

And maybe do a 5-minute addendum announcement episode on the "On the Metal" podcast?

Love your excellent work, thank you all


Yes, that's in the works: we have a website redesign coming up, and this was included in it. So stay tuned!

And the 5-minute announcement on the On the Metal feed is a great idea; thanks for the suggestion -- and for the kind words.

Finally, I hasten to add: we have a really exciting Space coming up on Monday[0], where we'll be joined by Jon Masters to talk about the importance of integrating hardware and software teams; join us!

[0] https://twitter.com/bcantrill/status/1545441245853495296


Others have posted the Twitter link, but here is the YouTube link: https://www.youtube.com/channel/UCFn4S3OexFT9YhxJ8GWdUYQ


Signals and Threads is an equally good CS podcast.


They've been posting their Twitter Spaces chats. https://github.com/oxidecomputer/twitter-spaces Not as focused as the podcast, but still very high quality content.


Who’s the target buyer of Oxide?

I ask genuinely … because I don't understand who would pay a premium for a rack server that is easier to maintain.

In a world of cloud and dedicated hosted servers (where the user of the server is not the buyer of the server), who would buy Oxide?

It seems like whoever can build the cheapest server wins now.

Said differently, cloud has completely changed the value chain for server buyers.


If Oxide computers had existed when I was working on sorting out the infrastructure for in-country domestic payments at Visa, I'd have lobbied pretty hard to adopt them. Assembling basically the same thing from a bunch of random vendors and terribly integrated software was an infuriating, expensive process that only achieved great results because we invested time and money into it that I'd rather have allocated to writing core payments & settlement applications.


Plenty of companies still run hardware: a lot of midsize enterprises and a lot of non-tech/non-software F500s. And quite a few won't move to the cloud -- between retailers who don't want to fund big bad Amazon, hyperactive security departments who (rationally or irrationally) see physical ownership as a necessity, and organizations whose needs are poorly suited to cloud offerings, the datacenter hardware market is far from dead.

You can proxy this demand by looking at VMware sales. Anyone running a large amount of bare-metal VMware virtualization has a large on-premises deployment and could find managed rack products like Oxide interesting.


I think the answer amounts to "anyone who has at least a rack full of Dell/HPE hardware".

Dell manages an annual revenue of greater than $20B, so on-prem HW is clearly going strong, regardless of whether or not you think it should be.


> who will spend the premium for a rack server that is easier to maintain?

Not my area, but based on other areas: "easier to maintain" can translate directly to "fewer engineer/admin hours consumed", which can translate directly to saved money (or resources that are now freed up to be put toward something more useful)


Banks, hedge funds, governments, … anyone who needs a lot of grunt and cannot offload to the cloud. I would say they target the same segment of buyers who go for Oracle’s “engineered” / ExaData / Exalogic machines.


Large enterprises running things on prem. Governments etc.


For normal businesses cost is important; smart governments, hedge funds, etc. may have the leeway to spend a little more for a better product or better tail risk.


People who have to maintain servers in house for various reasons. Maybe they need to run some software that needs to run even if they lose Internet connectivity. Or maybe they're working on secret stuff and their CIO/CTO has decided that this needs to be on prem instead of on the cloud.


Oxide business model explained:

You know how you can buy pizza in a cardboard box? What happens if you need to feed a party of ~80 people? You end up getting dozens of pizza boxes delivered.

Oxide Pizza delivers all that pizza in a single box.


dang so like all my pizzas are squished?


thats what the rack is for duh


Nah, the little plastic table placed in the center of the pizza is keeping the boxes from bowing.



I hope they target schools so the next generation goes to them to scale up their startups in the future.


Ultimately somebody has to buy, own, and operate the underlying hardware of the "cloud".

Obviously AWS, Azure, etc. do their own hardware procurement and totally bespoke stuff in house.

So this would be targeted at smaller hosting companies that aren't at Azure's massive scale but still need vast quantities of servers.


> So this would be targeted at smaller hosting companies that aren't at Azure's massive scale but still need vast quantities of servers.

But my understanding is that Oxide is/will be priced at a premium over a typical x86 server.

It seems unlikely that any hosting company is going to pay a premium just for a bit more convenience.

Note: I'm not trying to be a "hater". Just genuinely not understanding the market here.


You are correct: margins are very thin in the hosting business, and getting the absolute most server performance/number of nodes for the least money is very important.

I have been on the manufacturing side of this before.

Bulk x86-64 servers, such as you could buy from the vendors you meet at Computex Taipei, are a big part of it:

things along the same general idea as the Supermicro chassis that is 2U high and has four discrete nodes in it, but cheaper,

long/narrow motherboards with dual-socket Xeons,

and other similar stuff...

If the price for this is not competitive with that, or with just a bunch of 1U commodity dual-socket servers, the only people buying it will be enterprise/specialty end users, not hosting operations.


As I understand it, this isn't really a premium product in that sense.

What they seem to want to do is give you what a rack (or two) full of Dell servers plus a VMware license would give you.

And then you should also save money on maintenance, space, updates, compliance and so on.

If the approach to firmware and virtualization means you can get by with even one or two fewer employees, then that's already a huge amount of money.


Does “premium over a typical x86 server” mean a premium over the cheapest servers you can find or a premium over servers sold by (e.g.) Dell?


What is missing is a middle ground: an improvement over the broken state of things that is IPMI, aimed at automated deployments, without demanding a particular network environment for PXE booting or configuration of management IP addresses.

I haven't used Redfish, but it seems it does not address the problems of IPMI in this regard: to be fully automatable there shouldn't be an assumption about the network environment. Rather, I would like it to be an L2-based protocol (one has to use the MAC address as an identifier anyway), or, if IPv4/v6 is used, to at least mandate link-local auto-addressing by default. Redfish does seem to support emulating a CD-ROM drive for booting installation media, which is a nice idea, but what I really would like to see is directly writing an image to the hard disk. Then of course the hardware manufacturers would need to write high-quality firmware and not the regular quirk- and bug-ridden stuff that we know from IPMI implementations.
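
For reference, the virtual-media flow I'm describing looks roughly like this against the standard Redfish schema -- just a sketch, and the BMC address, credentials, and resource paths are made up (they vary by vendor):

    # Rough sketch of Redfish virtual media + boot override (paths/credentials are placeholders).
    import requests

    BMC = "https://169.254.10.20"         # hypothetical link-local BMC address
    AUTH = ("admin", "password")          # placeholder credentials
    VMEDIA = f"{BMC}/redfish/v1/Managers/1/VirtualMedia/CD"

    # Attach an installer ISO as a virtual CD-ROM drive.
    requests.post(
        f"{VMEDIA}/Actions/VirtualMedia.InsertMedia",
        json={"Image": "http://deploy.example/installer.iso", "Inserted": True},
        auth=AUTH,
        verify=False,  # BMCs commonly ship with self-signed certs
    )

    # Force the next boot from that virtual CD, one time only.
    requests.patch(
        f"{BMC}/redfish/v1/Systems/1",
        json={"Boot": {"BootSourceOverrideTarget": "Cd",
                       "BootSourceOverrideEnabled": "Once"}},
        auth=AUTH,
        verify=False,
    )

Even when this works, it still only gets you an emulated CD-ROM to boot from, not the direct image-to-disk write I'd actually like.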


Your last sentence reveals what you probably already know to be true: there isn't a middle ground. We tried for years to get hyperscaler-class software running on commodity-class servers -- and came to the conclusion that the hyperscalers came to (and as Alan Kay famously observed): to be really serious about elastic infrastructure, one needs to do one's own hardware. Now, ~2.5 years into Oxide, I can substantiate that with more detail than I could have ever imagined: there is no middle ground -- and indeed, this is why the infrastructure world has become so acutely bifurcated!


Do you think there's at least a place for a holistic system that's not a whole rack? Or do you think that those of us with smaller computing needs will just have to either rent from a hyperscaler or put up with PC servers?


Definitely, but it's not a market that Oxide intends to go after (at least not for the foreseeable future). Part of the challenge is that it really does require diverging from the reference designs (as these have effectively enshrined a proprietary BMC and everything that comes with it) -- and once you are away from reference designs, you are talking about a very different economic proposition. This is part of why we are so transparent and open about everything we're doing: our hope is that we can enable others to pick up the baton and serve those markets that we do not intend to address.


I put our implementation of _exactly_ this into production last week.

Boot a "CD-ROM" via IPMI into a custom installer, flash image (generated from a Dockerfile!) to disk, reboot. Install takes ~15 seconds, full process (going via POST once) takes ~4 minutes. This also allows us to have a "hardware validation" image (that one doesn't get persisted to disk).

Not sure if there's any plans to make it public right now, but I'll ask around. Feel free to contact me via email (in profile)
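
Roughly, the "flash image to disk" step boils down to something like the following -- a simplified sketch rather than our actual code, with the image URL, target device, and chunk size all placeholders:

    # Sketch of the installer's core step: stream a raw image onto the target disk, then reboot.
    import subprocess
    import requests

    IMAGE_URL = "http://deploy.example/images/host.img"   # hypothetical image server
    TARGET_DISK = "/dev/nvme0n1"                          # hypothetical target device

    with requests.get(IMAGE_URL, stream=True) as resp, open(TARGET_DISK, "wb") as disk:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=4 * 1024 * 1024):
            disk.write(chunk)

    subprocess.run(["sync"], check=True)     # flush writes before rebooting
    subprocess.run(["reboot"], check=True)   # boot into the freshly written image

A real installer would also handle partitioning, checksums, and error reporting, but that's the shape of it.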


Yes, automation with IPMI can be done (for particular hardware, due to quirks), and I recommend doing it (and I have done it) with a separate DHCP environment for the BMC NICs, so as to be at least agnostic about the OS NICs' network environment.

So, I share your excitement when it works, but I'm not happy with the out-of-the-box experience of IPMI. I wish there were something better, but unfortunately this is just one case of a mismatch between what HW people provide and what software people need.


> flash image (generated from a Dockerfile!)

Any hints about how to do this? I'd love to use something as nice to work with as a Dockerfile to build bare metal installs.


I wrote down the kernel of the idea quite a while ago here[0]; however, the actual version we run nowadays uses UEFI and is written in Go (the installer) + Python (an API that generates the ISO on demand).

Fly.io[1] also does this (although they boot the result in a VM, the concepts are the same)

[0]: https://blog.davidv.dev/docker-based-images-on-baremetal.htm...

[1]: https://fly.io/blog/docker-without-docker/
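
The kernel of the idea, stripped down to a sketch (not what we actually run -- the image name and mount point are placeholders, and the bootloader step is left out entirely):

    # Sketch: build the OS image as a container, then unpack its root filesystem onto the target disk.
    import subprocess

    IMAGE = "registry.example/baremetal-rootfs:latest"   # hypothetical image built from a Dockerfile
    TARGET = "/mnt/target"                               # freshly formatted root fs, already mounted

    # Create (but don't run) a container so its filesystem can be exported.
    cid = subprocess.run(
        ["docker", "create", IMAGE],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    # Stream the container's filesystem as a tarball and unpack it onto the target.
    export = subprocess.Popen(["docker", "export", cid], stdout=subprocess.PIPE)
    subprocess.run(["tar", "-x", "-f", "-", "-C", TARGET], stdin=export.stdout, check=True)
    export.wait()

    subprocess.run(["docker", "rm", cid], check=True)    # clean up the scratch container

A real setup also needs a bootloader, kernel, and fstab on the target, which this glosses over.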


> but I'll ask around.

Please do. There's a need for it. :)


Why do we still use "CD" or "ISO" images in virtual environments?


https://oxide.computer/product

    CPU: 2,048 x86 cores (AMD Milan)
    Memory: Up to 32TB DRAM
    Storage: 1024TB of raw storage in NVMe
    Network switch: Intel Tofino 2
    Network speed: 100 Gbps
That's a pretty meaty server!


That's for the rack.


Was going to say... if they fit that in a 4U I'm about to make my datacenter 8 times smaller.


I saw an IBM-funded research project that fit 1,024 ARM cores and 2-4 TB of memory into a 3U box. The box didn't include the liquid cooling system you needed to keep the thing running.


You could get that today with a 2U 4N Altra Max system.


Oh, it's funny that you can buy that as a server now. The research system was from 2015 I think.


Imagine a Beowulf cluster of these …


Noob here: any idea of the approximate price tag that should be expected for such a monster?


Looking at the hardware, I'd wag somewhere in the $2-3M range. Depends on how big of a profit margin they want to make or how scrappy they feel like being (could probably go as low as ~$1M if they can make it up in volume).

If they're following the typical 1/5x pricing model for support, that'd be roughly $500k/yr/rack. But it's also hard to do that while simultaneously describing Dell as "rapacious".


I suspect they are not going to be cheap at all, lol; aside from being a very nice hardware package, they also have an entire custom hypervisor platform, which is intended to be one of the big plusses of the platform.


Thanks!


Here is Bryan's application video for YC120 2019: https://youtu.be/px9OjW7GB0Q


Definitely worth watching


Why do you say that? Comes off as rather high-and-mighty and out of touch.


> Comes off as rather high-and-mighty

Cantrill is, shall we say, very opinionated. And does not suffer fools gladly.

Edit: but also, it's a YC (Anti)Application video, so being high-and-mighty is practically a prerequisite.


Slightly off topic, but hey Jerod if you’re reading, here’s a vote to keep up the weekly brief Changelog episodes!


Ask questions about business model on an Oxide thread, and speculation abounds. Ask questions about podcasts, and all the founders show up with links.


The business model is straightforward: we sell servers, a rack at a time. You put them in your data center and enjoy a cloud deployment model but an "I own the metal" ownership model.

I wrote something longer in a previous thread: https://news.ycombinator.com/item?id=30678324


The business model is elaborated on extensively in the linked conversation, for whatever it's worth.


According to Oxide's website, they're "Hardware, with the software baked in", so I'm worried this is going to be more like Cisco's ham-fisted UCS platform, or Dell/HP lock-in with DRAC and iLO and "hyperconverged" embedded Java fuckery that can never be patched.


I wrote a while back about the difference between "hyperconverged" and "hyperscale": https://news.ycombinator.com/item?id=30688865


All of the source will be open and re-flashable from what I've read. So I assume that won't be a problem.


I’ve been relatively satisfied with HP’s iLO offering, in that it’s been sufficient for my needs, avoids many pointless fly/drives to the colo (or dealing with the comic misnomer that is “smart hands”), and is still getting the occasional update on 8-year old servers.

Even for servers in my building, I find it’s more convenient than a crash cart for most things (and of course my desk is a better work environment).


I don’t understand how Oxide is any different than what VCE attempted to accomplish from a converged infra perspective.


Didn't VCE still use off-the-shelf rack servers? They were basically a hardware reseller with their own racks. They re-sold Cisco stuff and repackaged everyone else's stuff as a value-added service.


Disclaimer: I worked for VCE (now Dell/EMC)

VCE used Cisco C2xx and C4xx rackmount and BLxxx blade servers (before the EMC/Dell acquisition). Now, I believe they use all Dell hardware.


They designed and built their own hardware and wrote all their own firmware.


Cranston made a pretty good case for Oxide in a talk he gave - his argument as I understood it is that legacy BIOS is proprietary and insecure


> Cranston made a pretty good case for Oxide in a talk he gave

Hmmm wrong Bryan? Breaking Bad wouldn't look the same if you swapped them, I'm afraid.


Breaking Bad where Bryan Cantrill runs an illegal Bitcoin mining operation on Oxide servers under a NYC laundromat


The NSA tries to shut down Bryan Cranston so he goes underground, building illegally powerful servers in a commandeered Pixar storage room at night and sleeping under the stage of youth music venue 924 Gilman by day.


This is the funniest thing I’ve read today. Thank you!


His nemesis is El Salvadoran volcano crypto cartels, but Cantrill mounts a 51% attack and destroys their Magmacoin


“Jesse, we have to compile”


Funny because an Oxide co-founder is Jessie Frazelle.


printf("%s\n", my_name);


They're a Rust shop, so these days it would be:

println!("{my_name}");


"I am the one who port-knocks!"


Their driers are really slow because they use the exhaust heat from the server room.


There was another commenter who made the same mistake. I was going to correct them but thought better of it, then made the same mistake myself!


Awesome product. How much $$$?


I wish the Oxide web site had a little more detail on the software running on this. Does this use a custom hypervisor? If so, what is it based on, and what are its capabilities?


We’re redoing the website, though I’m not sure if this specific information will be on it.

The host is using Illumos and bhyve for hypervisor stuff. I don’t work on that part of the stack personally so that’s the most detail I can give you.


Just to add some additional detail: the control plane (Omicron[0] -- named before the COVID variant!), the user-level portion of the hypervisor (Propolis[1]) and the storage subsystem (Crucible[2]) are all open source. And while our focus has been building them for our own product, we know that they are at least usable outside of Oxide![3][4]

[0] https://github.com/oxidecomputer/omicron

[1] https://github.com/oxidecomputer/propolis

[2] https://github.com/oxidecomputer/crucible

[3] https://artemis.sh/2022/03/14/propolis-oxide-at-home-pt1.htm...

[4] https://artemis.sh/2022/06/14/oxide-crucible.html


> The elevation of the room where the rack is installed must be below 9,842 feet (3,000 meters).

Why?


What does this accomplish that buying some Open Compute Project-based x86-64 servers in whole-rack quantities does not?

Or a bunch of the 4-servers-in-a-single-2U-package setups from Supermicro?

And then interconnecting them by your own choice of 100GbE switches at top of rack.


This company was created because somebody who actually built a large cloud with Dell/Supermicro suffered years and years of pain.

If you do as you suggest you will have to do a whole lot more; at minimum you need to set up (and pay for) VMware.

Then you will need to figure out how to do automated firmware upgrades and low-level monitoring, and if security is a concern you likely want to configure secure boot with attestation and things like that.

Consider how much work it is to do what you suggest, vs buying this rack. And then consider how good (and repeatable) the end result is.

That is at least the theory, see:

https://www.youtube.com/watch?v=vvZA9n3e5pc


If you think the only solution to large scale virtualization on top of x86-64 bare metal is to pay vmware... yikes.


It's not the only solution, but it's commercially by far the most successful one. Even if you use something else, it's still a whole lot of work.


Will Windows Server/HyperV and VMWare run on these?


Windows Server yes -- as a hardware virtualized guest. HyperV and VMware, no: we provide the hypervisor and control plane (and that that's built into the cost of the rack is very much part of the value proposition!).


Seems like this company has gotten a lot of hype, but what do they actually do? Cloud hosting? Or selling physical servers? Both of those things sound like a terrible business though...


Even a cursory glance at their website or a listen to the episode pretty clearly explains that they're building server hardware for hyperscalers, i.e. they want to build and sell the hardware that your hosting provider would buy to host your applications. I would guess they see quite a market for on-premises computing, particularly with companies that for legal or competitive reasons cannot or do not want to lose control of their data by hosting it on a cloud service.


Haven't fully grokked how this will get huge. If there is a big market here, won't the AWS Outposts of the world win this market?


Disclaimer: I work for AWS

I think Outpost is a great hybrid-ish solution for those companies with workloads in AWS that want to supplement that with on-premise workloads using the same APIs and tools they are already used to.

The use cases I see for Oxide are, as others have said, hyperscalers or hosting providers who want their own on-premises infrastructure that isn't tied to another entity like AWS. Whether that's because you have the in-house talent to manage it or because compliance requires it, it does have a niche.


Currently, Outpost doesn't provide any discounts over hosting the same workloads in the cloud, so it seems only useful if you can't host in the public cloud. For now, they could easily be undercut, and a solution like Oxide has advantages.

But if Outpost were nearly the same price as Oxide, wouldn't you always want access to the full AWS or GCP infrastructure and world-class control plane versus something bespoke that only works in your own data center?

I think what Oxide has to bank on is that this isn’t a market that Google, Microsoft or AWS want to really compete in (e.g. it may require these companies to undercut themselves too much).


OK, so... what exactly counts as a hyperscaler? And how many of them are there?


My personal definition would be: people who design their data centers in at least rack-sized units (and ideally data-center-sized units) and create custom hardware fit for that purpose.

Wikipedia has a short page with a different definition: https://en.wikipedia.org/wiki/Hyperscale_computing


Hyperscalers? Not that many. But there are lots of other hosting providers and companies who run their own infrastructure.


AWS Outposts are tragically limiting in so many ways. If you are a small shop they are great but if you don’t fit their model exactly you are SOL.

Same problem with Anthos.

You can work around the limitations by having a million points of presence with them and splitting up your workloads, but there are real costs associated with that model. Running 1 rack in 100 DCs is a lot harder than 10 racks in 10 DCs.


FWIW I work with an engineer who came from a shop that used them. Long story short: they're not remotely cost-effective, are quite limited, and existing AWS toolchains run into all sorts of hair-pulling snags.

My gut is that they are for places that need on-prem for compliance purposes for certain parts of their stack.


Agree that today they aren’t competitive.

But that seems entirely due to AWS not currently being interested in this market versus something fundamentally limiting about their tech.


It's a private cloud rack aka hyperconverged infrastructure rack but they're allergic to standard terminology for some reason. I assume the market is cloud repatriation for startups.


Wikipedia says that hyperconverged infrastructure typically uses commercial off-the-shelf computers, so that's likely why they're allergic to it: they're designing their own computers, so the term isn't accurate.

https://en.wikipedia.org/wiki/Hyper-converged_infrastructure

Oxide racks won't be able to use anything other than Oxide computers (or maybe eventually third party computers that follow Oxide standards).


From what I gather, it's extremely beefy physical servers with DPUs built in -- i.e. perfect for large on-premises deployments while maintaining the high performance that DPUs enable, all as a single appliance.

It’s the kind of thing I would expect Nvidia or Dell to become big at as well, and Bryan Cantrill has enough leverage to pique my interest significantly here.


I don't think they're using DPUs.


what's a DPU?


Also known as a SmartNIC (among a bunch of other names): basically a NIC with an embedded CPU (usually ARM of some sort), so some workloads can be offloaded onto the NIC.

An example is Nvidia's BlueField; there are others.

That sort of thing is being used in a few areas; you can even run ESXi on them.

There was a fun article on ServeTheHome a while back about using them to build an ARM cluster (Link: https://www.servethehome.com/building-the-ultimate-x86-and-a... )


Datacenter processing unit, it's a rather new thing


with GPUs?


Listened for 18 minutes before stopping; not really sure the company's values warranted so much time.

Furthermore, the pretty leading question comparing Bryan to Steve Jobs forced me out.


Back from Sun times I remember Bryan loving the limelight. Somebody comparing you to Jobs is the pinnacle of limelight-seeking in our industry.


He demurred to the comparison and made a solid book recommendation. Seems better to just skip through uninteresting segments with the +45s button.


A lot of people have been compared to Jobs. At the very least, Cranston has some substance.


* Cantrill


i like that this is a running joke in this thread now


Fork yeah!


More substance than Elizabeth Holmes at least.



