I'm struggling to find a way to express my opinion about this video without seeming like a complete ass.
If the author's point was to make a low effort "ha ha AWS sucks" video, well sure: success, I guess.
Nobody outside of AWS sales is going to say AWS is cheaper.
But comparing the lowest-end instances, and apparently using ECS without seeming to understand how they're configuring or using it, makes their points about it being slower kind of useless. Yes, you got some instances that were 5-10x slower than Hetzner. On its own that's not particularly useful.
I thought, going in, that this was going to be along the lines of others I have seen previously: you can generally get a reasonably beefy machine with a bunch of memory and local SSDs that will come in at half or less the cost of a similarly specced EC2 instance. That would've been a reasonable path to take. Add on that you don't have issues with noisy neighbors when running a dedicated box, and yeah - something people can learn from.
But this... Yeah. Nah. Sorry
Maybe try again but get some help speccing out the comparison configuration from folks who do have experience in this.
Unfortunately it will cost more to do a proper comparison with mid-range hardware.
What is the point you are trying to make? Are you saying that we would need to have someone on payroll to have a usable machine? Then why not just have... a SysAdmin?
Shared instances are something even European "cloud" providers can do, so why is EC2 so much more expensive and slower?
Because people aren't going on AWS for EC2, they go on it to have access to RDS, S3, EKS, ELB, SNS, Cognito, etc. Large enough customers also don't pay list price for AWS.
Of the services you list, S3 is OK. I would rather admin an RDBMS than use RDS at small scale
> Large enough customers also don't pay list price for AWS.
At that scale the cost savings from not hiring sysadmins become much smaller relative to the bill, so what is the case for using AWS? The absolute savings from moving off would be huge.
I'm saying that if you do want to compare two different platforms on performance, it should probably be done in consultation with someone who has worked with it before.
To use an analogy, it's like someone who's never driven a car, and has really only read some basic articles about vehicles, deciding to test the performance of two random vehicles.
Maybe one of them does suck, and is overpriced - but you're not getting the full picture if you never figured out that you've been driving it in first gear the whole time.
A better presentation would be to have someone make the best performance/price on AWS EC2, then someone else make the best performance/price on Hetzner and compare.
I myself used EC2 instances with locally attached NVMe drives in an mdadm RAID-0 with BTRFS on top, and that was quite fast. It was for a CI/CD pipeline, so only the config and the most recent build data needed to be kept. Either BTRFS or the CI/CD database (PostgreSQL, I think) would eventually get corrupted and I'd run a rebuild script a few times a year.
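For anyone curious, here is roughly what that kind of setup looks like, as a minimal sketch in Python shelling out to the usual tools. The device names, array name and mount point are assumptions for illustration (check lsblk on your instance type), and everything on the array is gone if the instance stops, which is why the rebuild script matters.

```python
# Sketch: stripe two local (ephemeral) NVMe instance-store drives with mdadm
# and put Btrfs on top, for scratch/CI data that can be rebuilt at any time.
import subprocess

NVME_DEVICES = ["/dev/nvme1n1", "/dev/nvme2n1"]  # hypothetical instance-store devices
ARRAY = "/dev/md0"
MOUNT_POINT = "/mnt/ci-scratch"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a RAID-0 (striped) array across the local NVMe drives.
run(["mdadm", "--create", ARRAY, "--level=0",
     f"--raid-devices={len(NVME_DEVICES)}", *NVME_DEVICES])

# Format with Btrfs and mount it for the CI/CD working data.
run(["mkfs.btrfs", "-f", ARRAY])
run(["mkdir", "-p", MOUNT_POINT])
run(["mount", ARRAY, MOUNT_POINT])
```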
I made a similar comment on the video a week ago. It is an AWFUL analysis, in almost every way. Which is shocking, because it's so easy to show that AWS is overpriced and underpowered.
At this point managing AWS, Azure or other cloud providers is as complicated as managing your own infrastructure, or more so, but at an enormous cost multiplier, and if you have steady-traffic workloads I'm not sure it makes sense for most companies other than as a way of burning money. You still need to pay a sysadmin to manage the cloud, and the complexity of the ecosystem is pretty brutal. Combine that with random changes that cause problems, like when we got locked out of our Azure account because they changed how part of their roles system works. I've also seen people not understand the complexity of permissions and give way too much access to people who should not have it.
For what it's worth - my day job does involve running a bunch of infrastructure on AWS. I know it's not good value, but that's the direction the organisation went long before I joined them.
Previous companies I worked for had their infrastructure hosted with the likes of Rackspace, Softlayer, and others. Every now and then someone from management would come back from an AWS conference saying how they'd been offered $megabucks in AWS Credit if only we'd sign an agreement to move over. We'd re-run the numbers on what our infrastructure would cost on AWS and send it back - and that would stop the questions dead every time.
So, I'm not exactly tied to doing it one way or another.
I do still think though that if you're going to do a comparison on price and performance between two things, you should at least be somewhat experienced with them first, OR involve someone who is.
The author spun up an ECS cluster and then is talking about being unsure of how it works. It's still not clear whether they spun up Fargate nodes or EC2 instances. There's talk of performance variations between runs. All of these things raise questions about their testing methodology.
So, yeah, AWS is over-priced and under-performing by comparison with just spinning up a machine on Hetzner.
But at least get some basics right. I don't think that's too much to ask.
On the "value" question, it's worth considering why so many tech savvy firms with infra-as-code chops remain with GCP or AWS. It's unlikely, given how such firms work, they find no value in this.
FWIW, I firmly believe non "cloud native" platforms should be hosted using PXE-booted bare metal within the physical network constructs that cloud provider software-defined-network abstractions are designed to emulate.
As a CTO of a number of small startups, I am still struggling to understand what exactly AWS and other cloud providers give you to justify the markup.
And yes we’ve been heavy users of both AWS and Google Cloud for years, mainly because of the credits they initially provided, but also used VMs, dedicated servers and other services from Hetzner and OVH extensively.
In my experience, in terms of availability and security there’s not much difference in practice. There are tons of good tools nowadays to treat a physical server or a cluster of them as a cloud or a PaaS, it’s not really more work or responsibility, often it is actually simpler depending on the setup you choose. Most workloads do not require flexible compute capability and it’s also easy and fast to get it from these cheaper providers when you need to.
I feel like the industry has collectively accepted that Cloud prices are a cost of doing business and unquestionable, “nobody ever got fired for choosing IBM”. Thinking about costs from first principles is an important part of being an engineer.
When your cheap dedicated server goes down and your admin is on holiday and you have hundreds of angry customers calling you, you'll get it.
Or you need to restore your Postgres database and you find out that the backups didn't work.
And finally you have a brilliant idea of hiring a second $150k/year dev ops admin so that at least one is always working and they can check each other's work. Suddenly, you're spending $300k on two dev ops admins alone and the cost savings of using cheaper dedicated servers are completely gone.
When your AWS bill suddenly spikes to $69k because some data science intern left a huge GPU-backed EC2 instance running in ap-southeast-2 with a misconfigured auto-scaling group, and your CTO is at a "digital transformation" conference, and you have hundreds of angry investors asking why your burn rate tripled, you'll get it.
Or you need to debug why your Lambda function is throttling and you find out that the CloudWatch logs were never properly configured and you’ve been flying blind for three months.
And finally you have a brilliant idea of hiring a second $150k/year AWS solutions architect so that at least one person can actually understand the bill and they can check each other’s Terraform configs. Suddenly, you’re spending $300k on two cloud wizards alone and the cost savings of "not managing your own infrastructure" are completely gone.
> When your cheap dedicated server goes down and your admin is on holiday and you have hundreds of angry customers calling you, you'll get it.
Or when you're locked out of your account and being ignored, and the only way to get support from your cloud provider is to post on Hacker News and make as much noise as possible in the hope that it gets spotted.
Or your cloud provider wipes your account and you are a $135B pension fund [1]
Or your cloud portfolio is so big that you need a "platform" team of multiple devops/developer staff to build wrappers around and package up your cloud provider for you, and your platform team is now the bottleneck.
Cloud is useful, but it's not as pain-free as everyone says when compared with managing your own; it still costs money and work. Having worked on several cloud transformations, they've all cost more and taken more effort than expected. A large proportion have also been canned/postponed/re-evaluated due to cost/size/time/complexity.
Unless you are a big spender with a dedicated technical account manager, your support is likely to be as bad as a no-name budget VPS provider's.
Both cloud and traditional hosting have their merits and place.
[1] https://arstechnica.com/gadgets/2024/05/google-cloud-acciden...
https://www.reddit.com/r/hetzner/comments/1ha5qgk/hetzner_ca...
> Or when you're locked out of your account and being ignored, and the only way to get support from your cloud provider is to post on Hacker News and make as much noise as possible in the hope that it gets spotted.
It is statistically far more likely that your cloud service will go down for hours or days, and you will have no recourse and will just have to wait till AWS manage to resolve it.
I suspect that this is really about liability. When AWS goes down you can just throw up your hands, everyone's in the same boat. If your own server goes down you worry that your customers doubt your competence.
It's actually kinda frustrating - as an industry we're accepting worse outcomes due to misperceptions. That's how the free market goes sometimes.
Nobody gets fired for hiring IBM. This is the new version: when you go down because AWS did, it's someone else's fault. Of course AWS will compare their downtime to industry standards for on-premise and conclude they are down less often. On-premise engineers can say until they are blue in the face that their downtime was on a Sunday at 3 am and didn't impact their customers; it doesn't seem to matter.
After 8 years operating like this, I have had approximately the same number of critical outages in standard Cloud as with these providers.
One included a whole OVH building burning down with our server in it, and recovery was faster than the recent AWS and Cloudflare outages. We felt less impotent and we could do more to mitigate the situation.
If you want to, these providers also offer VMs, object storage and other virtualized services for way cheaper with similar guarantees, they are not stuck in the last century.
And I don’t know how people are using cloud, but most config issues happen above the VM/Docker/Kubernetes level, which is the same whether you are on cloud or not. Even fully managed database deployments or serverless backends are not really that much simpler or less error-prone than deploying the containers yourself. Actually, the complexity of Cloud is often a worse minefield of footguns, with their myriad artificial quirks and limitations. Often dealing with the true complexities of the underlying open-source technologies they are reselling ends up being easier and more predictable.
This fearmongering is really weakening us as an industry. Just try it, it is not as complex or dangerous as they claim.
It is not only not that much more complex, it is often less complex.
Higher-level services like PaaS (Heroku and above) genuinely do abstract a number of details. But EC2 is just renting pseudo-bare computers—they save no complexity, and they add more by being diskless and requiring networked storage (EBS). The main thing they give you is the ability to spin up arbitrarily many more identical instances at a moment’s notice (usually, at least theoretically, though the amount of the time that you actually hit unavailability or shadow quotas is surprisingly high).
I'm a geek and I like to tinker with hardware. I want to maximize my hardware per dollar and have built a ton of DIY computers myself since I was young. I'm all about getting the most hardware for the money.
But I'd like to sleep at night and the cost of AWS is not a significant issue to the business.
That’s fair enough, but that’s a luxury position; if costs are not a concern to you then there’s not much point in discussing the merits of different methods to manage infrastructure efficiently.
And yes of course such costs are nothing if you are thinking of $300K just on a couple sysadmins. But this is just a bizarre bubble in a handful of small areas in the US and I am not sure how it can stay like that for much longer in this era of remote work.
We built a whole business with $100K in seed and a few government grants. I have worked with quite a few world-class senior engineers happily making 40K-70K.
Don't get me wrong. If I'm starting a brand new business with my own money and no funding, I'd absolutely buy a cheap dedicated instance. In the past, AWS gave out generous credits to startups/new businesses. This is no longer the case.
Once my business requires reliability and I need to hire a dedicated person to manage, I'd absolutely move to the cloud. I personally like Digital Ocean/Render.
What prevents an EC2 instance from going down in exactly the same way? Other hosting providers offer automatic backup too - it's not an AWS exclusive feature.
So if your app enters a crash-loop and fails to start, an AWS engineer comes in and fixes it? Because that has not been my experience...
The truth is that there's still a lot of things you have to handle, including cloud bugs and problems. And there are other problems you don't have to think about anymore, especially with fully managed, high-level PaaS-like services.
I ran a cloud backend service for a startup with users, using managed services, and we still had an on-call team. The cloud is not magic.
If we assume that you're a human being that sleeps, say, 8 hrs/day, and not an LLM, that leaves you with 16 hours of consciousness a day, for an uptime of 66%. That's upside-down nines. You don't even crack one nine of uptime. If we assume you're on a diet of meth and cocaine, and only sleep 2 hours a day, that still puts you at only about 92% uptime.
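The arithmetic behind the joke, as a quick sketch (treating "awake" as "available" and converting availability into its rough number of nines):

```python
import math

def nines(availability: float) -> float:
    # "Number of nines": 0.9 -> 1, 0.99 -> 2, 0.999 -> 3, and so on.
    return -math.log10(1 - availability)

for hours_awake in (16, 22):
    availability = hours_awake / 24
    print(f"awake {hours_awake}h/day: {availability:.1%} uptime, {nines(availability):.2f} nines")
# awake 16h/day: 66.7% uptime, 0.48 nines
# awake 22h/day: 91.7% uptime, 1.08 nines
```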
On every team I have worked on so far that used AWS, 50-100% of the developers had the knowledge and credentials (and usually the confidence) to troubleshoot, just fix it, or replace it.
On every team with dedicated hardware in a data center, it was generally 1-2 people who could fix stuff quickly, no matter the size of the company (small ones, of course - so 10-50 devs). And that's with replacement hardware available.
I'm not even one of the "cloud is so great" people - but if you're generally doing software, it's actually a lot less friction.
And while the ratio of the cost difference may sound bad, it's generally not. Unless we're talking huge scale, you can buy a lot of AWS crap for the yearly salary of a single person.
You said developers have the knowledge and credentials (and thus the work) of managing your infra, and a moment later basically asserted you're saving money on the salary for the sysadmin. This is the actual lie you got sold on.
AWS isn't going to help you setup your security, you have to do it yourself. Previously a sysadmin would do this, now it's the devs. They aren't going to monitor your database performance. Previously a sysadmin would do this, now it's the devs. They aren't going to setup your networking. Previously a sysadmin would do this, ...
Managing hardware and updating hosts is maybe 10% of the work of a sysadmin. You can't buy much on 1/10th of a sysadmin's salary, and even for the things you can, the quality and response time are generally going to be shit compared to someone who cares about your company (been there).
Yes, please continue explaining the job I did in the past to me.
It doesn't change anything, especially as I did not blatantly argue cloud=good,hardware=bad. That is a completely different question.
My point is that given some circumstances, you need a lot less specialized deep knowledge if all your software just works[tm] on a certain level of the stack upwards. Everyone knows the top 1/3 of the stack and you pay for the bottom 2/3 part.
I didn't mean to say "let's replace a sysadmin with some AWS stuff", my point was "100k per year on AWS makes a lot of small companies run".
Also, my experience was with having hardware in several DCs around the world, and we did not have people there (small company, but present in at least 4 countries) - so we had to pay for remote hands, and the experience was mostly bad. Maybe my bosses chose bad DCs, or maybe I'd trust sysadmins at "product companies" more than those working as remote hands at a hoster...
> Every team I have worked on so far, if using AWS you had 50-100% of the developers with the knowledge and credentials (and usually the confidence) to troubleshoot/just fix it/replace it.
Is that because they were using AWS, so they hired people who knew AWS?
I would personally have far more confidence in my ability to troubleshoot or redeploy a dedicated server than the AWS services to replace it.
> Every team with dedicated hardware in a data center it was generally 1-2 people who would have fixed stuff quickly, no matter the size of the company (small ones, of course - so 10-50 devs). And that's with available replacement hardware.
There are lots of options for renting dedicated hardware that the service provider will maintain. It's still far cheaper than AWS. Even if you have redundancy for everything, it's still a lot cheaper.
Agreed, there's definitely a heavy element of that to it.
But, at the risk of again being labelled as an AWS Shill - there's also other benefits.
If your organisation needs to deploy some kind of security/compliance tools to help with getting (say) SOC2 certification - then there's a bunch of tools out there to help with that. All you have to do then is plug them into your AWS organisation. They can run a whole bunch of automated policy checks to say you're complying with whatever audit requirements.
If you're self-hosting, or using Hetzner - well, you're going to spend a whole lot more time providing evidence to auditors.
Same goes with integrating with vendors.
Maybe you want someone to load/save data for you - no problems, create an AWS S3 bucket and hand them an AWS IAM Role and they can do that. No handing over of creds.
There's a bunch of semi-managed services where a vendor will spin up EC2 instances running their special software, but since it's running in your account - you get more control/visibility into it. Again, hand over an AWS IAM Role and off you go.
It's the Slack of IAAS - it might not be the fastest, it's definitely not the cheapest, and you can roll your own for sure. But then you miss out on all these integrations that make life easier.
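For context, the "hand them an AWS IAM Role" pattern mentioned above boils down to a cross-account role plus a scoped policy. A hedged sketch with boto3; the account ID, role name and bucket are placeholders, and a real setup would normally also add an ExternalId condition and tighter actions:

```python
import json
import boto3

VENDOR_ACCOUNT_ID = "111122223333"    # hypothetical vendor AWS account
BUCKET = "example-data-exchange"      # hypothetical bucket name
ROLE_NAME = "vendor-data-exchange"

iam = boto3.client("iam")

# Trust policy: the vendor's account may assume this role (no credentials handed over).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: scoped to the one bucket they should load/save data in.
access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}

iam.create_role(RoleName=ROLE_NAME,
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(RoleName=ROLE_NAME, PolicyName="bucket-access",
                    PolicyDocument=json.dumps(access_policy))
```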
What AWS gives you is the ability to spin up dozens, if not thousands, of hosts in a single click.
If you run your own hardware, getting stuff shipped to a datacenter and installed is 2 to 4 weeks (and potentially much longer based on how efficient your pipeline is)
What really needs thousands of hosts nowadays? Even if you have millions of users. Computers are plenty fast now and leveraging that is not any harder if you choose the right stack.
And even if you are building with microservices, most standard servers can handle dozens in a single machine at once. They are all mostly doing network calls with minimal compute. Even better actually if they are in the same host and the network doesn’t get involved.
If you want to, there are simple tools to hook a handful of them as a cluster and/or instantly spawn extra slightly costlier VMs in case of failure or a spike in usage, if a short outage is really a world-ending event, which it isn’t for almost every software system or business. These capabilities have not been exclusive to the major cloud providers for years.
Of course we are generalizing a lot by this point, I’d be happy to discuss specific cases.
That's true if you own your own hardware, but you can provision a leased dedicated server from many different providers in an hour or three, and still pay far less than for comparable hardware from AWS.
I suspect that if you broke projects on AWS down by the numbers, the vast majority don't need it.
There are other benefits to using AWS (and drawbacks), but "easy scaling" isn't just premature optimisation: if you build something to do something it's never going to do, that's not optimisation, it's simply waste.
I'm not familiar with Hetzner personally, but maybe they mean the uplink? I've found that some smaller providers advertise 10Gbit, but you rarely get close to that speed in reality.
It makes sense to think about price in the context of your business. If your entire infra cost is a rounding error on your balance sheet, of course you would pick the provider with the best features and availability guarantees (choose IBM/AWS). If your infra cost makes up a significant percentage of your operating expenses, you will start spending engineering effort to lower the cost.
That's why AWS can get away with charging the prices they do, even though it is expensive, for most companies it is not expensive enough to make it worth their while to look for cheaper alternatives.
It’s often less about engineering effort and more about taking some small risks to try less mainstream (but still relatively mature) alternatives by reasoning from first principles and doing a bit of homework.
From our experience, you can actually end up in a situation that requires less engineering effort and be more stable, while saving on costs, if you dare to go to a bit lower abstraction layers. Sometimes being closer to the metal is simpler, not more complex. And in practice complexity is much more often the cause of outages rather than hardware reliability.
> As a CTO of a number of small startups, I am still struggling to understand what exactly AWS and other cloud providers give you to justify the markup.
If you are running a company that warrants building a data center, then AWS does not add much.
Otherwise you face the "if you want to make an apple pie from scratch, you must first invent the universe" problem. Simply put, you can get started right on day one, in a pay-as-you-go model. You can write code, deploy and ship from the very first day, instead of having to go deep down the infrastructure rabbit hole.
Plus, shutting things down is easy as well. Things don't work out? Good news! You can shut down the infrastructure that very day, instead of having to worry about the capital expenditure spent to build infrastructure, and without having to worry about its use later.
Simply put, AWS is infrastructure you can hire and fire at will.
Hetzner storage is a drop-in replacement for S3. Even though there are some minor differences, it's not like the difference between a managed NAT and a router.
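"Drop-in" here mostly means pointing the same S3 client at a different endpoint. A minimal sketch; the endpoint URL, keys and bucket name are placeholders rather than verified values, so check the provider's object storage docs:

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://fsn1.your-objectstorage.com",  # assumed Hetzner-style endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Same boto3 calls as against AWS S3.
s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")
print(s3.list_objects_v2(Bucket="my-bucket").get("KeyCount"))
```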
There's a distinction between just saying it's more expensive and saying it's slower at the same price. Compared to well spec'ed and administered dedicated servers, it's basically impossible to get the same performance from AWS (or other cloud services) at any price. Yes, there are advantages, scaling being the greatest one. But you won't get the same raw speed you can achieve with fast storage and processing in a single machine (or a tight network) through cloud services—probably at all, but certainly not for anywhere near the same price.
And if you are willing to pay, you can significantly over-provision dedicated servers, solving much of the scaling problem as well.
I'm migrating my last AWS services to dedicated servers with Gitops.
In principle, AWS gives you a few benefits that are worth paying for. In practice, I have seen all of them turn into massive issues.
Price and performance are obviously bad. More annoying than that, their systems have arbitrary limitations that you may not be aware of because they're considered 'corner cases' -- e.g. my small use case bumped against a DNS limitation, and streaming of replies was not supported.
Then, you have a fairly steep learning curve with their products and their configuration DSLs.
There are GitOps solutions that give you all the benefits it promises, without any of the downsides or compromises.
You just have to bite the bullet and learn Kubernetes. It may be a bit more of a learning curve, but in my experience I would say not by much. And you have much more flexibility in the precise tech stack that you choose, so you can reduce it by using stuff you already know well.
This is exactly true, and is something we have built our business around. In fact, I just kicked-off a multi-TiB Postgres migration for one of our clients this morning. We're moving them out of Supabase and onto a bare-metal multi-AZ Postgresql cluster in Hetzner.
I'm going to say what I always say here - for so many SME's the hyperscaler cloud provider has been the safe default choice. But as time goes on a few things can begin to happen. Firstly, the bills grow in both size and variability, so CFOs start to look increasingly askance at the situation. Secondly, so many technical issues start to arise that would simply vanish on fixed-size bare-metal (and the new issues that arise are well addressed by existing tooling). So the DevOps team can find themselves firefighting while the backlog keeps growing.
The problem really is one of skills and staffing. The people who have both the skills and the desire to actually implement and maintain the above tend to be the greying-beards who were installing RedHat 6 in their bedrooms as teenagers (myself included). And there are increasingly few of us who are not either in management and/or employed by the cloud providers.
So if companies can find the staff and the risk appetite, they can go right ahead and realise something like a 90% saving on their current spend. But that is unusual for an SME.
So we started Lithus[0] to do this for SMEs. We _only_ offer a 50% saving, not 90%, but we take on all the risk and staffing issues. We don't charge for the migration, and the billing cycle only starts once migration is complete. And we provide a fixed number of engineering days per month, included. So you get a complete Kubernetes cluster with open source tooling, and a bunch of RedHat-6-installing greying-beards to use however you need. /pitch
I've been trying to start a very similar thing around here (Spain), at first specializing a bit in backups and storage, still working very small as an independent contractor while I keep my day job (at least for now, I'm testing the waters...).
I don't totally miss the days when I had to configure multipath storage with barely documented systems ("No, we don't support Suse, Debian, whatever...", "No, you don't pay for the highest support level, you can't access the knowledge base..."), or integrate disparate systems that theoretically used an open standard that every vendor botched and modified (for example DICOM; nowadays the situation is way better), or other nightmare situations. Although I do miss accessing the lower layers.
But I've been working for years with my employers' and clients' cloud providers, and I've seen how the bills climb through the roof, how easy it is to make a million-dollar mistake, how difficult (and expensive) it is to leave in some cases, and how the money and power are concentrated in a handful of companies, and I've decided that I should work on that situation. Although I'll probably earn less money, as the 'external contractor' situation is not as good in Spain as in some other countries, unless you're very specialized.
But thankfully, the situation is in some cases better than in the 00s: documentation is easier to get, hardware is cheaper to come by for experimenting or even for business use, and WAN connections are way cheaper...
Just curious: Did you move to self-hosted Supabase? Or migrated to the underlying OSS equivalents for each feature/service?
I find Supabase immensely helpful to minimize overhead in the beginning, but would love to better understand where it starts breaking and how hard an eventual migration would be.
In this particular case the only need was for Postgres migration, no other services needed.
The problems we've seen or heard about with Supabase are:
* Cost (in either magnitude or variability). Either from usage, or having to go onto their Enterprise-tier pricing for one reason or another
* The usual intractable cloud-oddities – dropped connections, performance speed-bumps
* Increased network latency (just the way it goes when data has to cross a network fabric. It's fast, but not as fast as your own private network)
* Scaling events tend not to be as smooth as one would hope
None of these are unique to Supabase though, they can simply all arise naturally from building infrastructure on a cloud platform.
Regarding self-hosted Supabase - we're certainly open to deploying this for our clients, we've been experimenting with it internally. Happy to chat with you or anyone who's interested. Email is adam@ company domain.
Sure, EBS or any network-attached storage is expected to be a lot slower than a local SSD for synchronous writes or random reads, as there is a network stack in between. But my understanding is that for those use cases, you can use metal instances with local NVMe (ephemeral, though).
Although the video is correct in the sense that AWS is vastly overpriced compared to most other cloud/VPS providers, the title is wrong: OP is not using a dedicated server (see 2:40 of the video) -- he is using a shared VPS. Hetzner sell proper dedicated servers, whether bare metal or virtualized.
I believe their bare metal servers should have even better price/perf ratio, but I don't have data to back that up.
I do like watching these comparisons however it reminds me of a conversation I had recently with my 10 year old.
Son: Why does the croissant cost €2.80 here while it's only €0.45 in Lidl? Who would buy that?
Me: You're not paying for the croissant, you're paying for the staff to give it to you, for the warm café, for the tables to be cleaned and for the seat to sit on.
So for enough people the price is not an issue. Someone else is paying.
On the other side, people are pretty bad at this sort of cost analysis. I fall into this trap myself: I prefer to spend more of my own time on something when I should just recommend buying it.
Exactly. The whole promise behind cloud was "you don't need an ops team". Now go check for yourself if that's true: go to your favorite jobs portal and search for AWS, or, to include Azure and GitHub sysadmins, search for "devops engineer". And for laughs, search for "IAM engineer", which is a job only about managing permissions for users (not deciding about permissions, JUST managing them and fixing problems, nothing more). And frankly, the cloud is to blame: figuring out correct permissions now requires teams of PhDs to do correctly on infuriating web interfaces. I used to think Active Directory permissions were bad. I was wrong. The job portals show it: no corporate department should ever go without a team of IAM engineers, who are totally not really sysadmins.
What do you get for this? A redundant database without support (because while AWS support really tries so hard to help that I feel bad saying this, they don't get time to debug stuff, and redundant databases are complicated whether or not you use the cloud). You also get S3 distributed storage, and serverless (which is kind of CGI, except using Docker and AWS markups to make one of the most efficient stateless ways to run code on the web really expensive). Btw: for all of these, better open source versions are available as a Helm chart, with effectively the same amount of support.
You can use vercel to get out from under this, but that only works for small companies' "I need a small website" needs. It cannot do the integration that any even medium sized company requires.
Oh, and you get Amazon TLA, which is another brilliant Amazon invention: in the time it takes you to write a devops script, Amazon comes up with another three-letter AWS service that is 2x as expensive as anything else, doesn't solve any problem, and that you now have to learn and use because one of the devs wants it on his resume. It's all about using AI for maximizing uselessness.
And you'll do all this on Amazon's patented 1994-styled webpages because even claude code doesn't understand the AWS CLI. And the GCP and Azure ones are somehow worse (their websites look a lot nicer though, I'll readily admit that. But they're not significantly more functional)
Conclusion: while cloud has changed the job of sysadmin somewhat, there is no real difference, other than a massive price increase. Cloud is now so expensive that, for a single month's cloud services, you can buy hardware and put it on your desk. As the YouTube video points out, even an 8GB M1 Mac mini, even a Chinese mini-PC with AMD, runs Docker far better than the (now reduced to 2GB memory) standard cloud images.
More often than not, I’d rather avoid the self-focused staff who rarely handle your food with hygiene in mind and who, at this time of the year in the northern hemisphere, are likely to be sick, the mediocre coffee (price surge in coffee beans), the dirty tables, and the uncomfortable seating at a café. And it’s more like 5€ for the croissant alone in many places these days. Lidl’s croissants aren’t very good, but they’re only marginally less good than what you can hope for at a café. McDonald’s croissants in Italy are quite OK, by the way.
Me too, but it's worth remembering that's not the case for everyone. Some people want to have a little chat with the person at the counter, sit down for 5 mins in the corner of the cafe and eat their croissant. 5 euro can be a good price if that's what you want, and it doesn't matter if the lidl croissant is free, it will still be disappointing to the person who wants the extras.
Absolutely. I believe what I wanted to convey is that there are trade-offs in every decision that you can make. Maybe that’s even the point of a decision in the first place.
This in turn means that you always have several options, and more importantly you can invent a new way to enjoy the experience you hope to get from that interaction at a café in your mind, maybe a scene from your past or from a movie, which you’re no longer as likely to experience on average.
That said, I’ve got a favorite café where I used to spend time frequently. But their service deteriorated. And the magic is gone. So I moved on with my expectations.
Back to the analogy with the hyperscalers. I had bad experience with Azure and GCP, I’ve experienced the trade-offs of DigitalOcean and Linode and Hetzner, and of running on-premises clusters. It turned out, I’m the most comfortable with the trade-offs that AWS imposes.
Exactly. AWS has its own quirks and frustrations, sure, but at the end of the day, I’m not using AWS just for raw compute. I’m paying for the entire ecosystem around it: security and access management, S3, Lambda, networking, monitoring, reliability guarantees, and a hundred little things that quietly keep the lights on.
People can have different opinions on this, of course, but personally, if I have a choice, I'd rather not be juggling both product development and the infrastructure headaches that come with running everything myself. That trade-off isn’t worth it for me.
IIRC, when the cloud services were taking over, the argument was that it’s much cheaper to pay for AWS than to pay engineers to handle the servers. This was also a popular argument for running unoptimized code (i.e. it’s much cheaper to run two servers instead of making your code twice as fast).
Since the industry has matured now, there must be a lot of opportunity to optimize code and run it on bare metal to make systems dramatically faster and dramatically cheaper.
If you think about it, the algorithms that we run to deliver products are actually not that complicated and most of the code is about accommodating developers with layers upon layers of abstraction.
When you're a solo SaaS developer/company owner, the dedicated server option really shines. I get a 10x lower price and no downsides that I've ever seen.
"But are your database backups okay?" Yeah, I coded the backup.sh script and confirmed that it works. The daily job will kick up a warning if it ever fails to run.
"But don't you need to learn Linux stuff to configure it?" Yeah, but I already know that stuff, and even if I didn't, it's probably easier to learn than AWS's interfaces.
"But what if it breaks and you have to debug it?" Good luck debugging an AWS lambda job that won't run or something; your own hardware is way more transparent than someone else's cloud.
"But don't you need reproducible configurations checked into git?" I have a setup.sh script that starts with a vanilla Ubuntu LTS box, and transforms it into a fully-working setup with everything deployed. That's the reproducible config. When it's time to upgrade to the next LTS release (every 4 years or so), I just provision a new machine and run that script again. It'll probably fail on first try because some ubuntu package name changed slightly, but that's a 5-minute fix.
"But what about scaling?" One of my crazy-fast dedicated machines is equal to ~10 of your slow-ass VPSes. If my product is so successful that this isn't enough, that's a good problem to have. Maybe a second dedicated machine, plus a load balancer, would be enough? If my product gets so popular that I'm thinking about hundreds of dedicated machines, then hopefully I have a team to help me with that.
For example, if the service is using a massive dataset hosted on AWS such as Sentinel 2 satellite imagery, then the bandwidth and egress costs will be the driving factors.
I’ve come to believe that such comparisons usually come from people who don’t understand the trade-offs of AWS in production.
Each project has certainly its own requirements. If you have the manpower and a backup plan with blue/green for every infrastructure component, then absolutely harness that cost margin of yours. If it’s at a break even when you factor in specialist continuity - training folks so nothing’s down if your hardware breaks, then AWS wins.
If your project can tolerate downtime and your SREs can sleep at night, then you might profit less from the several-nines HA SLOs that AWS guarantees.
It’s very hard and costly to replicate what AWS gives you if you have requirements close to enterprise levels. Also, the usual argument goes - when you’re a startup you’ll be happy to trade CAPEX for OPEX.
For an average hobby project maybe not the best option.
As for latency, you can get just as good. Major exchanges run their matching engines in AWS DCs, you can co-locate.
Why is this a video? I'm not going to watch it. I will read the AI summary of the transcript though:
The video argues that AWS is dramatically overpriced and underpowered compared to cheap VPS or dedicated servers. Using Sysbench benchmarks, the creator shows that a low-cost VPS outperforms AWS EC2 and ECS by large margins (EC2 has ~20% of the VPS’s CPU performance while costing 3× more; ECS costs 6× more with only modest improvements). ECS setup is also complicated and inconsistent. Dedicated servers offer about 10× the performance of similarly priced AWS options. The conclusion: most apps don’t need cloud-scale architecture, and cloud dominance comes from marketing—not superior value or performance.
Didn't even mention the difference in data costs, or S3 plus transfer, because then we'll be going into 2-orders-of-magnitude differences ...
Not to mention what happens when you pay per megabyte and someone DDoSes you. Cloud brought back almost all hosting antipatterns, and means denial-of-service attacks really should be renamed denial-of-wallet attacks. And leaving a single S3 bucket, a single serverless function, a single ... available (not even open) makes you vulnerable if someone knows or figures out the URL.
Pricing in AWS is heavily dependent on whether you reserve the instance and for how long.
In my experience, if you reserve a bare metal instance for 3 years (which is the biggest discount), it costs 2 times the price of buying it outright.
I'm surprised to hear about the numbers from the video being way different, but then, it's a video, so I didn't watch it and can't tell if he did use the correct pricing.
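The comparison itself is just arithmetic once you have the numbers. A sketch with made-up illustrative prices (plug in real quotes from the pricing pages; colo, power and bandwidth costs vary a lot):

```python
HW_PURCHASE = 12_000         # hypothetical price of buying the server outright
COLO_PER_MONTH = 150         # hypothetical colo/power/bandwidth per month
RESERVED_PER_MONTH = 900     # hypothetical 3-year-reserved bare metal instance, per month

months = 36
own_hardware = HW_PURCHASE + COLO_PER_MONTH * months
reserved_cloud = RESERVED_PER_MONTH * months

print(f"own hardware over 3 years:      ${own_hardware:,}")
print(f"3-year reserved instance total: ${reserved_cloud:,}")
print(f"ratio: {reserved_cloud / own_hardware:.1f}x")
```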
You seem to insinuate that the correct pricing is using a 3 year commitment. That seems very much not logical to me considering the original promise of the cloud to be flexible, and to scale up and down on demand.
Elasticity is very expensive, in practice people only use it for one-off jobs, preferably using the cheaper unreliable "spot" instances (meaning the job must support being partially re-run to recover, which implies a complex job splitting and batching platform).
For traditional, always-on servers, you should reserve them for 3 years. You still have the ability to scale up, just not down. You can always go hybrid if you don't know what your baseline usage is.
Should you be designing for a single server to exist for 3 years when you have such elastic compute? Why not design for living on spot instances and get prices lower than Hetzner with better performance? What about floating savings plans? There's a ton left on the table here just to say 'AWS bad' for some views.
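For what it's worth, "living on spot" is mostly a flag on the launch call plus tolerating the two-minute interruption notice. A hedged sketch with boto3; the AMI ID, region and instance type are placeholders, and whether spot actually beats a dedicated box on price still depends on the workload:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Request the capacity as a one-time spot instance instead of on-demand.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="c6i.2xlarge",        # hypothetical instance type
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```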
Compare the worldwide latency. I released an app in the App Store and got users from Japan to Saudi Arabia to the United States. AWS basically guarantees you can reach anyone who speaks English with low latency.
This is always an unfair comparison, because for any realistic comparison you need to have two servers in two locations for georedundancy, and you need to pay for the premises and their physical security too. For example, you need to pay for security locks with access logs and a commercial security company, or you have to pay for co-location in a datacenter.
When you add up all these costs plus the electricity bill, I wager that many cloud providers are on the cheaper side due to economies of scale. I'd be interested in such a more detailed comparison for various locations/setups vs cloud providers.
What almost never goes into this discussion, however, is the expertise and infrastructure you lose when you put your servers into the cloud. Your own servers and their infrastructure are a moat that can be sold as various products if needed. In contrast, relying on a cloud provider is mostly an additional dependency.
> you need to have two servers on two locations for georedundancy
You also absolutely need this with EC2 instances, which is what the comparison was about. So no, it's not unfair.
If you're using an AWS service built on top of EC2, Fargate, or anything else, you WILL see the same costs (on top of the extremely expensive Ops engineer you hire to do it, of course).
> need to pay for the premises and their physical security, too [...] plus the electricity bill
...and all of this is included in the Hetzner service.
Once again comments conflating "dedicated server" with "co-location".
AWS counts as managed servers with constant security monitoring. That's a huge difference from paying for a dedicated server where you're responsible for the installation and maintenance of the operating system and all software, intrusion detection and threat response, and server monitoring.
I am a Hetzner customer for my forthcoming small company in order to keep running costs low, but it's not as if companies using AWS were irrational. You get what you pay for.
> AWS counts as managed servers with constant security monitoring. That's a huge difference from paying for a dedicated server where you're responsible for the installation and maintenance of the operating system and all software, intrusion detection and threat response, and server monitoring.
This has absolutely nothing to do with "georedundancy" or "physical security" or "electricity".
The point of having a private chef is so you don’t have to cook food by yourself.
It’s still extremely useful to know if the private chef is cheaper or more expensive than cooking by yourself and by how much, so you can make a decision more aware of the trade offs involved.
The problem with this discussion is that a lot of people on these threads work as overpaid assistants to the one private chef, but also have never cooked at home.
Translating:
A lot of people work with AWS, are making bank, and are terrified of their skill set being made obsolete.
They also have no idea what it means to use a dedicated server.
That’s why we get the same tired arguments and assumptions (such as the belief that bare-metal means “server room here in the office”) in every discussion.
One of the least insightful comments I’ve seen in my 16 years here. “it’s because everyone here is dumb and knows it, and they are panicking and lying because they don’t want you to blow up their scam.”
I'm not calling anyone dumb. It is fine to not have experience with Hetzner or to have only with AWS. It's fine for someone to not know how to cook at home.
About people who work with it, I'm just alluding to the famous quote "It is difficult to get a man to understand something when his salary depends upon his not understanding it".
It's not interesting as a standalone question, indeed. The question is: what do you enable by having a private chef?
Is it the fact that you don't want to spend the time cooking? Or is it cooking plus shopping plus cleaning up after?
Or does it count the time to take cooking lessons? And include the cost of taking the bus to those cooking lessons?
Does the private chef even use your house, or their own kitchen? Or can you get a smaller house without a kitchen altogether? Especially at the rate of kitchen improvement, where kitchens don't last 20 years anymore, you're going to need a new kitchen every 5 years. (Granted, the analogy is starting to fail here, but you get my point.)
Big companies have been terrible at managing costs and attributing value. At least with cloud, the costs are somewhat clear. Also, finding skilled staff is a considerable expense for businesses with more than a few pieces of code, and it takes time; you can't just get them on a whim and get rid of them.
> The entire point of AWS is so you don't have to get a dedicated server.
Yet every company I've worked for still used at least a bunch of AWS VPS exactly as they would have used dedicated servers, just for ten times the cost.
A lot of people do use AWS for EC2.
The last 18 years of tech companies I’ve worked for used AWS EC2, every single one.
Isn't the point of using AWS that it's never in 1st gear?...
For an older but more to the point comparison, see this: https://jan.rychter.com/enblog/cloud-server-cpu-performance-...
Great link. I find my instances are more sensitive to RAM pricing than CPU performance as my needs are generally more IO-bound than CPU.
I'm pretty sure this was posted on HN a while ago, and that I read it then.
It's good - makes its point well.
2019, very pre-Graviton
This comment comes across as someone who is either trying to flog AWS to customers or someone who has to justify a job that depends on AWS.
Well, you're entitled to your opinion.
Bingo.
> "you're holding it wrong" for x10 the price
Ooof. Not a good look.
S3 is also 10x more expensive than this single consumer grade second hand hard drive I have.
Managed NAT gateways are also 10000x more expensive than my router.
This is a boring argument that has been done to death.
The snide rebuttal basically writes itself.
[1] https://arstechnica.com/gadgets/2024/05/google-cloud-acciden...
https://www.reddit.com/r/hetzner/comments/1ha5qgk/hetzner_ca...
It is statistically far more likely that your cloud service will go down for hours or days, and you will have no recourse and will just have to wait till AWS manage to resolve it.
I suspect that this is really about liability. When AWS goes down you can just throw up your hands, everyone's in the same boat. If your own server goes down you worry that your customers doubt your competence.
It's actually kinda frustrating - as an industry we're accepting worse outcomes due to misperceptions. That's how the free market goes sometimes.
Nobody gets fired for hiring IBM. This is the new version: when you go down because of AWS, it's someone else’s fault. Of course AWS will compare their downtime to industry standards for on-premise and conclude they are down less often. On-premise engineers can say until they are blue in the face that their downtime happens on a Sunday at 3 am when it doesn't impact their customers; it doesn't seem to matter.
When us-east-1 goes down, half the internet goes down with it.
Customers call and complain about downtime, I can just vaguely point at everything being on fire from Facebook to Instagram to online banking sites.
They get it.
When the self-hosted server fries itself, I'm on the hook for fixing it ASAP.
The difference is that if AWS goes down, I know for a fact that it'll be back up without me doing anything.
If my own dedicated server goes down, I'm going to need to call my admin at 3am 10 times just to wake him up.
You know that AWS will come back up. You definitely don’t know whether your own instances will come back or if you’ll need to redeploy it all.
After 8 years operating like this, I have had approximately the same number of critical outages in standard Cloud as with these providers.
One included a whole OVH building burning down with our server in it, and recovery was faster than the recent AWS and Cloudflare outages. We felt less impotent and we could do more to mitigate the situation.
If you want to, these providers also offer VMs, object storage and other virtualized services for way cheaper with similar guarantees, they are not stuck in the last century.
And I don’t know how people are using cloud, but most config issues happen above the VM/Docker/Kubernetes level, which is the same whether you are on cloud or not. Even fully managed database deployments or serverless backends are not really that much simpler or less error-prone than deploying the containers yourself. Actually, the complexity of cloud is often a worse minefield of footguns, with its myriad artificial quirks and limitations. Often dealing with the true complexities of the underlying open-source technologies they are reselling ends up being easier and more predictable.
This fearmongering is really weakening us as an industry. Just try it, it is not as complex or dangerous as they claim.
It is not only not that much more complex, it is often less complex.
Higher-level services like PaaS (Heroku and above) genuinely do abstract a number of details. But EC2 is just renting pseudo-bare computers—they save no complexity, and they add more by being diskless and requiring networked storage (EBS). The main thing they give you is the ability to spin up arbitrarily many more identical instances at a moment’s notice (usually, at least theoretically, though the amount of the time that you actually hit unavailability or shadow quotas is surprisingly high).
I'm a geek and I like to tinker with hardware. I want to maximize my hardware per dollar and have built a ton of DIY computers myself since I was young. I'm all about getting the most hardware for the money.
But I'd like to sleep at night and the cost of AWS is not a significant issue to the business.
That’s fair enough, but that’s a luxury position; if costs are not a concern to you, then there’s not much point in discussing the merits of different methods to manage infrastructure efficiently.
And yes of course such costs are nothing if you are thinking of $300K just on a couple sysadmins. But this is just a bizarre bubble in a handful of small areas in the US and I am not sure how it can stay like that for much longer in this era of remote work.
We built a whole business with $100K in seed and a few government grants. I have worked with quite a few world-class senior engineers happily making 40K-70K.
Don't get me wrong. If I'm starting a brand new business with my own money and no funding, I'd absolutely buy a cheap dedicated server. In the past, AWS gave out generous credits to startups/new businesses. This is no longer the case.
Once my business requires reliability and I need to hire a dedicated person to manage, I'd absolutely move to the cloud. I personally like Digital Ocean/Render.
That's even worse when AWS goes down, and the myth of it never going down should be more than shattered by now.
What prevents an EC2 instance from going down in exactly the same way? Other hosting providers offer automatic backup too - it's not an AWS exclusive feature.
Nothing. It's just that I'm not the one responsible to fix it at 3am.
So if your app enters a crash-loop and fails to start, an AWS engineer comes in and fixes it? Because that has not been my experience.
The truth is that there are still a lot of things you have to handle, including cloud bugs and problems. And other problems you don't have to think about anymore, especially with fully managed, high-level PaaS-like services.
I ran a cloud backend service for a startup with users, using managed services, and we still had an on-call team. The cloud is not magic.
Then who is responsible to fix it?
AWS if it's AWS' fault.
You do realize that a server can "go down" for many other reasons than "the intern pulled the plug on it", right?
Sure, but AWS has more downtime than I do :-)
Yes but you go on holidays but AWS does not.
If we assume that you're a human being that sleeps, say 8 hrs/day, and not an LLM, that leaves you with 16 hours of consciousness a day, for an uptime of 66%. That's upside-down nines. You don't even crack one nine of uptime. If we assume you've a diet of meth and cocaine, and only sleep 2 hours a day, that still puts you at only about 92% uptime.
> If we assume that you're a human being
I'll have you know I am a cantaloupe, you insensitive clod!
When your system goes down on AWS and your AWS admin is on holiday, you'll have the same problem.
What is your point?
Every team I have worked on so far, if using AWS you had 50-100% of the developers with the knowledge and credentials (and usually the confidence) to troubleshoot/just fix it/replace it.
Every team with dedicated hardware in a data center it was generally 1-2 people who would have fixed stuff quickly, no matter the size of the company (small ones, of course - so 10-50 devs). And that's with available replacement hardware.
I'm not even one of the "cloud is so great" people - but if you're generally doing software, it's actually a lot less friction.
And while the ratio of cost difference may sound bad, it's generally not. Unless we're talking huge scale, you can buy a lot of AWS crap for the yearly salary of a single person.
You said developers have the knowledge and credentials (and thus the work) of managing your infra, and a moment later basically asserted you're saving money on the salary for the sysadmin. This is the actual lie you got sold on.
AWS isn't going to help you setup your security, you have to do it yourself. Previously a sysadmin would do this, now it's the devs. They aren't going to monitor your database performance. Previously a sysadmin would do this, now it's the devs. They aren't going to setup your networking. Previously a sysadmin would do this, ...
Managing hardware and updating hosts is maybe 10% of the work of a sysadmin. You can't buy much on 1/10th of a sysadmin's salary, and even for the things you can, the quality and response time are generally going to be shit compared to someone who cares about your company (been there).
Yes, please continue explaining the job I did in the past to me.
It doesn't change anything, especially as I did not blatantly argue cloud=good, hardware=bad. That is a completely different question.
My point is that given some circumstances, you need a lot less specialized deep knowledge if all your software just works[tm] on a certain level of the stack upwards. Everyone knows the top 1/3 of the stack and you pay for the bottom 2/3 part.
I didn't mean to say "let's replace a sysadmin with some AWS stuff", my point was "100k per year on AWS makes a lot of small companies run".
Also my experience was with having hardware in several DCs around the world, and we did not have people there (small company, but present in at least 4 countries), so we had to pay for remote hands and the experience was mostly bad. Maybe my bosses chose bad DCs, or maybe I'd trust sysadmins at "product companies" more than those working as remote hands at a hosting provider...
> Every team I have worked on so far, if using AWS you had 50-100% of the developers with the knowledge and credentials (and usually the confidence) to troubleshoot/just fix it/replace it.
Is that because they were using AWS, so they hired people who knew AWS?
I would personally have far more confidence in my ability to troubleshoot or redeploy a dedicated server than the AWS services to replace it.
> Every team with dedicated hardware in a data center it was generally 1-2 people who would have fixed stuff quickly, no matter the size of the company (small ones, of course - so 10-50 devs). And that's with available replacement hardware.
There are lots of options for renting dedicated hardware that the service provider will maintain. It's still far cheaper than AWS. Even if you have redundancy for everything, it's still a lot cheaper.
I don't have an AWS admin. I assume a $2.4 trillion company always has DevOps on call?
> “nobody ever got fired for choosing IBM”
Agreed, there's definitely a heavy element of that to it.
But, at the risk of again being labelled as an AWS Shill - there's also other benefits.
If your organisation needs to deploy some kind of security/compliance tools to help with getting (say) SOC2 certification - then there's a bunch of tools out there to help with that. All you have to do then is plug them into your AWS organisation. They can run a whole bunch of automated policy checks to say you're complying with whatever audit requirements.
If you're self-hosting, or using Hetzner - well, you're going to spend a whole lot more time providing evidence to auditors.
Same goes with integrating with vendors.
Maybe you want someone to load/save data for you - no problems, create an AWS S3 bucket and hand them an AWS IAM Role and they can do that. No handing over of creds.
There's a bunch of semi-managed services where a vendor will spin up EC2 instances running their special software, but since it's running in your account, you get more control/visibility into it. Again, hand over an AWS IAM Role and off you go.
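For anyone who hasn't done the cross-account dance, here is a minimal sketch of what "hand them an IAM Role" looks like with the AWS CLI. The account ID, bucket and role names are made up, and a real setup would add an ExternalId condition and tighter scoping:

    # Bucket the vendor should be able to read from (names/IDs are hypothetical)
    aws s3api create-bucket --bucket example-vendor-exchange --region us-east-1

    # Trust policy: allow the vendor's AWS account to assume the role
    cat > trust.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
        "Action": "sts:AssumeRole"
      }]
    }
    EOF
    aws iam create-role --role-name VendorS3Access \
      --assume-role-policy-document file://trust.json

    # Grant the role read access to that one bucket only
    cat > s3-policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
          "arn:aws:s3:::example-vendor-exchange",
          "arn:aws:s3:::example-vendor-exchange/*"
        ]
      }]
    }
    EOF
    aws iam put-role-policy --role-name VendorS3Access \
      --policy-name ReadExchangeBucket --policy-document file://s3-policy.json

No credentials change hands; the vendor assumes the role from their own account and you can revoke it at any time.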
It's the Slack of IAAS - it might not be the fastest, it's definitely not the cheapest, and you can roll your own for sure. But then you miss out on all these integrations that make life easier.
What AWS gives you is the ability to spin up dozens if not thousands of hosts in a single click.
If you run your own hardware, getting stuff shipped to a datacenter and installed is 2 to 4 weeks (and potentially much longer based on how efficient your pipeline is)
What really needs thousands of hosts nowadays? Even if you have millions of users. Computers are plenty fast now and leveraging that is not any harder if you choose the right stack.
And even if you are building with microservices, most standard servers can handle dozens in a single machine at once. They are all mostly doing network calls with minimal compute. Even better actually if they are in the same host and the network doesn’t get involved.
If you want to, there are simple tools to hook a handful of them as a cluster and/or instantly spawn extra slightly costlier VMs in case of failure or a spike in usage, if a short outage is really a world-ending event, which it isn’t for almost every software system or business. These capabilities have not been exclusive to the major cloud providers for years.
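For concreteness, the "hook a handful of them as a cluster" part can be as small as plain Docker Swarm. A sketch with placeholder IPs and image names:

    # On the first dedicated box: initialise the cluster
    docker swarm init --advertise-addr 10.0.0.1

    # On each additional box: join with the token printed by the init command
    docker swarm join --token <worker-token> 10.0.0.1:2377

    # Back on the manager: run a replicated, load-balanced service
    docker service create --name web --replicas 3 -p 80:8080 myorg/myapp:latest

    # Scaling up later is one command
    docker service scale web=6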
Of course we are generalizing a lot by this point, I’d be happy to discuss specific cases.
That's if you own your own hardware; you can provision a leased dedicated server from many different providers in an hour or three, and still pay far less than for comparable hardware from AWS.
That sounds like a good deal, what providers offer this?
OVH, Hetzner, Scaleway. Lots of smaller providers. Most people who offer dedicated servers do this.
Which is an awesome capability, if you need it.
I suspect that if you broke projects on AWS down by the numbers, the vast majority don't need it.
There are other benefits to using AWS (and drawbacks), but "easy scaling" isn't just premature optimisation: if you build something to do something it's never going to do, that's not optimisation, it's simply waste.
They need it at the beginning to get started quickly, then they don't but don't bother moving out.
Not too different from how many other lines of business get their clients in the door.
But on Hetzner, you can usually get a dedicated server installed and ready tomorrow.
Even sooner than that. I bought a dedicated server on auction and it was ready 2h after the payment.
Hetzner is oversold. It's not appropriate for production in the same sense that EC2 obviously is. It's fine for staging though.
Out of curiosity, how is a dedicated server oversold?
I'm not familiar with Hetzner personally, but maybe they mean the uplink? I've found that some smaller providers advertise 10Gbit, but you rarely get close to that speed in reality.
It makes sense to think about price in the context of your business. If your entire infra cost is a rounding error on your balance sheet, of course you would pick the provider with the best features and availability guarantees (choose IBM/AWS). If your infra cost makes up a significant percentage of your operating expenses, you will start spending engineering effort to lower the cost.
That's why AWS can get away with charging the prices they do, even though it is expensive, for most companies it is not expensive enough to make it worth their while to look for cheaper alternatives.
It’s often less about engineering effort and more about taking some small risks to try less mainstream (but still relatively mature) alternatives by reasoning from first principles and doing a bit of homework.
From our experience, you can actually end up in a situation that requires less engineering effort and be more stable, while saving on costs, if you dare to go to a bit lower abstraction layers. Sometimes being closer to the metal is simpler, not more complex. And in practice complexity is much more often the cause of outages rather than hardware reliability.
> As a CTO of a number of small startups, I am still struggling to understand what exactly AWS and other cloud providers give you to justify the markup.
If you have a company that warrants building a data center, then AWS does not add much.
Otherwise you face the 'if you want to make an apple pie from scratch, you must first invent the universe' problem. Simply put, you can get started right on day one, in a pay-as-you-go model. You can write code, deploy and ship from the very first day, instead of having to go deep down the infrastructure rabbit hole.
Plus shutting things down is easy as well. Things don't work out? Good news! You can shut down the infrastructure that very day, instead of having to worry about the capital expenditure spent to build infrastructure and whether it will ever be used again.
Simply put, AWS is infrastructure you can hire and fire at will.
Hetzner storage is a drop-in replacement for S3. Even though there are some minor differences, it's not like the difference between a managed NAT and a router.
There's a distinction between just saying it's more expensive and saying it's slower at the same price. Compared to well spec'ed and administered dedicated servers, it's basically impossible to get the same performance from AWS (or other cloud services) at any price. Yes, there are advantages, scaling being the greatest one. But you won't get the same raw speed you can achieve with fast storage and processing in a single machine (or a tight network) through cloud services—probably at all, but certainly not for anywhere near the same price.
And if you are willing to pay, you can significantly over-provision dedicated servers, solving much of the scaling problem as well.
NAT gateways not being free is criminal.
"AWS inflation"
I'm migrating my last AWS services to dedicated servers with GitOps. In principle, AWS gives you a few benefits that are worth paying for. In practice, I have seen all of them turn into massive issues. Price and performance are obviously bad. More annoying than that, their systems have arbitrary limitations that you may not be aware of because they're considered 'corner cases' -- e.g. my small use-case bumped against a DNS limitation, and streaming of replies was not supported. Then, you have a fairly steep learning curve with their products and their configuration DSLs.
There are GitOps solutions that give you all the benefits that are promised, without any of the downsides or compromises. You just have to bite the bullet and learn Kubernetes. It may be a bit more of a learning curve, but in my experience not by much. And you have much more flexibility in the precise tech stack that you choose, so you can reduce the curve by using stuff you already know well.
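To make that concrete, the core of a pull-based GitOps loop is tiny. A toy sketch you'd run from a cron job or systemd timer (real setups would use Argo CD or Flux instead, and the repo path and overlay here are hypothetical):

    #!/usr/bin/env bash
    # Toy pull-based GitOps loop: apply whatever is on the repo's main branch.
    set -euo pipefail

    REPO_DIR=/opt/gitops/infra

    # Sync the local checkout to the remote main branch
    git -C "$REPO_DIR" fetch origin
    git -C "$REPO_DIR" reset --hard origin/main

    # Apply the kustomize overlay for this environment, declaratively
    kubectl apply -k "$REPO_DIR/overlays/production"

The cluster converges on whatever is in git; rollback is `git revert` plus the next run of the loop.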
This is exactly true, and is something we have built our business around. In fact, I just kicked-off a multi-TiB Postgres migration for one of our clients this morning. We're moving them out of Supabase and onto a bare-metal multi-AZ Postgresql cluster in Hetzner.
I'm going to say what I always say here - for so many SME's the hyperscaler cloud provider has been the safe default choice. But as time goes on a few things can begin to happen. Firstly, the bills grow in both size and variability, so CFOs start to look increasingly askance at the situation. Secondly, so many technical issues start to arise that would simply vanish on fixed-size bare-metal (and the new issues that arise are well addressed by existing tooling). So the DevOps team can find themselves firefighting while the backlog keeps growing.
The problem really is one of skills and staffing. The people who have both the skills and the desire to actually implement and maintain the above tend to be the greying-beards who were installing RedHat 6 in their bedrooms as teenagers (myself included). And there are increasingly few of us who are not either in management and/or employed by the cloud providers.
So if companies can find the staff and the risk appetite, they can go right ahead and realise something like a 90% saving on their current spend. But that is unusual for an SME.
So we started Lithus[0] to do this for SMEs. We _only_ offer a 50% saving, not 90%. But take on all the risk and staffing issues. We don't charge for the migration, and the billing cycle only starts once migration is complete. And we provide a fixed number of engineering days per month included. So you get a complete Kubernetes cluster with open source tooling, and a bunch of RedHat-6-installing greying-beards to use however you need. /pitch
[0] https://lithus.eu
I've been trying to start a very similar thing around here (Spain) at first specializing a bit on backups and storage, still working very small as an independent contractor while I keep my daily job (at least for now, I'm testing the waters...).
I don't really totally miss the days where I had to configure multipath storage with barely documented systems ("No, we don't support Suse, Debian, whatever...", "No, you don't pay for the highest support level, you can't access the knowledge base..."), or integrate disparate systems that theoretically were using an open standard but was botched and modified by every vendor (For example DICOM. Nowadays the situation is way better.) or other nightmare situations. Although I miss accessing the lower layers.
But I've been working for years with my employers' and clients' cloud providers, and I've seen how the bills climb through the roof, how easy it is to make a million-dollar mistake, how difficult (and expensive) it is to leave in some cases, and how the money and power are concentrated in a handful of companies, and I've decided that I should work on that situation. Although I'll probably earn less money, as the 'external contractor' situation is not as good in Spain as in some other countries, unless you're very specialized.
But thankfully, the situation is in some cases better than in the 00s: documentation is easier to get, hardware is cheaper to come by and experiment or even use it for business, WAN connections are way cheaper...
Just curious: Did you move to self-hosted Supabase? Or migrated to the underlying OSS equivalents for each feature/service?
I find Supabase immensely helpful to minimize overhead in the beginning, but would love to better understand where it starts breaking and how hard an eventual migration would be.
In this particular case the only need was for Postgres migration, no other services needed.
The problems we've seen or heard about with Supabase are:
* Cost (in either magnitude or variability), either from usage or from having to go onto their Enterprise-tier pricing for one reason or another
* The usual intractable cloud-oddities – dropped connections, performance speed-bumps
* Increased network latency (just the way it goes when data has to cross a network fabric; it's fast, but not as fast as your own private network)
* Scaling events tend not to be as smooth as one would hope
None of these are unique to Supabase though, they can simply all arise naturally from building infrastructure on a cloud platform.
Regarding self-hosted Supabase - we're certainly open to deploying this for our clients, we've been experimenting with it internally. Happy to chat with you or anyone who's interested. Email is adam@ company domain.
10x sounds way off. Try something with good NVMe disks and a decent amount of RAM. It should be 30x.
Sure, EBS or any network-attached storage is expected to be a lot slower than a local SSD for synchronous writes or random reads, as there is a network stack in between. But my understanding is that for those use cases, you can use instance types with local NVMe (ephemeral though).
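If you want to see the gap yourself, a quick fio sketch along these lines works. The mount points are assumptions, and it should be run against scratch files, not live data:

    # 4K random reads, direct I/O - run once against EBS, once against local NVMe
    fio --name=randread --filename=/mnt/ebs/testfile --size=2G \
        --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
        --direct=1 --runtime=30 --time_based

    # Synchronous 4K writes (fsync after every write), the database-commit pattern
    fio --name=syncwrite --filename=/mnt/nvme/testfile --size=2G \
        --rw=randwrite --bs=4k --iodepth=1 --fsync=1 \
        --direct=1 --runtime=30 --time_based

Compare IOPS and completion latency between the two runs; the fsync-heavy case is usually where network-attached storage hurts the most.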
Although the video is correct in the sense that AWS is vastly overpriced compared to most other cloud/VPS providers, the title is wrong: OP is not using a dedicated server (see 2:40 of the video) -- he is using a shared VPS. Hetzner sell proper dedicated servers, whether bare metal or virtualized.
I believe their bare metal servers should have even better price/perf ratio, but I don't have data to back that up.
I do like watching these comparisons however it reminds me of a conversation I had recently with my 10 year old.
Son: Why does the croissant cost €2.80 here while it's only €0.45 in Lidl? Who would buy that?
Me: You're not paying for the croissant, you're paying for the staff to give it to you, for the warm café, for the tables to be cleaned and for the seat to sit on.
Good example.
I also like the "why does a bottle of water cost $5 after security at airports" example.
You have no choice. You’re locked in and can’t get out.
Maybe that’s the better analogy?
On point.
We don't pay million $ bills on AWS to "hang out" in a cozy place. I mean, you can, but that's insanity.
Also. Your company is paying. You are not.
So for enough people the price is not an issue. Someone else is paying.
On the other side, people are pretty bad at this sort of cost analysis. I fall into this trap too: I prefer to spend more of my own time on something when I should just recommend buying it.
The people cleaning and keeping the café warm are your Ops team.
AWS is just an extremely expensive Lidl.
EDIT: autocorrect typo, coffee to café
He means the location, not the fluid. My coffee better be hot, not warm.
Typo.
Exactly. The whole promise behind cloud was "you don't need an ops team". Now go check for yourself if that's true: go to your favorite jobs portal and search for AWS, or, to include Azure and GitHub sysadmins, search for "devops engineer". And for laughs search for "IAM engineer", which is a job solely about managing permissions for users (not deciding about permissions, JUST managing them and fixing problems, nothing more). And frankly, the cloud is to blame: figuring out correct permissions now requires teams of PhDs working on infuriating web interfaces. I used to think Active Directory permissions were bad. I was wrong. The job portals show it: no corporate department should ever go without a team of IAM engineers, who are totally not really sysadmins.
What do you get for this? A redundant database without support (and while AWS support really does try so hard to help that I feel bad saying this, they don't get time to debug your stuff, and redundant databases are complicated whether or not you use the cloud). You also get S3 distributed storage, and serverless (which is kind of like CGI, except using Docker and AWS markups to make one of the most efficient stateless ways to run code on the web really expensive). Btw: for all of these, better open source versions are available as a helm chart, with effectively the same amount of support.
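For example, the self-hosted equivalents really are a couple of helm commands away. Chart names here are the common community ones, release names are placeholders, and you still have to size and operate them:

    # Postgres with replication (Bitnami chart)
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install mydb bitnami/postgresql-ha

    # S3-compatible object storage (MinIO)
    helm repo add minio https://charts.min.io/
    helm install mystore minio/minio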
You can use vercel to get out from under this, but that only works for small companies' "I need a small website" needs. It cannot do the integration that any even medium sized company requires.
Oh, and you get Amazon TLA, which is another brilliant amazon invention: during the time it takes you to write a devops script Amazon TLA comes up with another three-letter AWS service that you now have to use, because one of the devs wants it on his resume, is 2x as expensive as anything else, doesn't solve any problem and you now have to learn. It's all about using AI for maximizing uselessness.
And you'll do all this on Amazon's patented 1994-styled webpages because even claude code doesn't understand the AWS CLI. And the GCP and Azure ones are somehow worse (their websites look a lot nicer though, I'll readily admit that. But they're not significantly more functional)
Conclusion: while cloud has changed the job of sysadmin somewhat, there is no real difference, other than a massive price increase. Cloud is now so expensive that, for a single month's cloud services, you can buy hardware and put it on your desk. As the youtube points out, even an 8GB M1 mac mini, even a chinese mini-pc with AMD, runs docker far better than the (now reduced to 2GB memory) standard cloud images.
I used to believe that, but in the enterprise we now have whole teams of client-side cloud engineers to manage our AWS/Azure/GCP infra!
More often than not, I'd rather avoid the café: the self-absorbed staff who rarely handle your food with hygiene in mind and, at this time of year in the northern hemisphere, are likely to be sick; the mediocre coffee (price surge in coffee beans); the dirty tables; and the uncomfortable seating. And it's more like 5€ for the croissant alone in many places these days. Lidl's croissants aren't very good, but they're only marginally less good than what you can hope for at a café. McDonald's croissants in Italy are quite OK, by the way.
Me too, but it's worth remembering that's not the case for everyone. Some people want to have a little chat with the person at the counter, sit down for 5 mins in the corner of the cafe and eat their croissant. 5 euro can be a good price if that's what you want, and it doesn't matter if the lidl croissant is free, it will still be disappointing to the person who wants the extras.
Absolutely. I believe what I wanted to convey is that there are trade-offs in every decision that you can make. Maybe that’s even the point of a decision in the first place.
This in turn means that you always have several options, and more importantly you can invent a new way to enjoy the experience you hope to get from that interaction at a café in your mind, maybe a scene from your past or from a movie, which you’re no longer as likely to experience on average.
That said, I’ve got a favorite café where I used to spend time frequently. But their service deteriorated. And the magic is gone. So I moved on with my expectations.
Back to the analogy with the hyperscalers. I had bad experience with Azure and GCP, I’ve experienced the trade-offs of DigitalOcean and Linode and Hetzner, and of running on-premises clusters. It turned out, I’m the most comfortable with the trade-offs that AWS imposes.
AWS feels more like Lidl though...
Exactly. AWS has its own quirks and frustrations, sure, but at the end of the day, I’m not using AWS just for raw compute. I’m paying for the entire ecosystem around it: security and access management, S3, Lambda, networking, monitoring, reliability guarantees, and a hundred little things that quietly keep the lights on.
People can have different opinions on this, of course, but personally, if I have a choice, I'd rather not be juggling both product development and the infrastructure headaches that come with running everything myself. That trade-off isn’t worth it for me.
I measured this years ago: https://jan.rychter.com/enblog/cloud-server-cpu-performance-...
IIRC, when cloud services were taking over, the argument was that it’s much cheaper to pay for AWS than to pay engineers to handle the servers. This was also a popular argument for running unoptimized code (i.e. it’s much cheaper to run two servers than to make your code twice as fast).
Since the industry has matured now, there must be a lot of opportunity to optimize code and run it on bare metal to make systems dramatically faster and dramatically cheaper.
If you think about it, the algorithms that we run to deliver products are actually not that complicated and most of the code is about accommodating developers with layers upon layers of abstraction.
When you're a solo SaaS developer/company owner, the dedicated server option really shines. I get a 10x lower price and no downsides that I've ever seen.
"But are your database backups okay?" Yeah, I coded the backup.sh script and confirmed that it works. The daily job will kick up a warning if it ever fails to run.
"But don't you need to learn Linux stuff to configure it?" Yeah, but I already know that stuff, and even if I didn't, it's probably easier to learn than AWS's interfaces.
"But what if it breaks and you have to debug it?" Good luck debugging an AWS lambda job that won't run or something; your own hardware is way more transparent than someone else's cloud.
"But don't you need reproducible configurations checked into git?" I have a setup.sh script that starts with a vanilla Ubuntu LTS box, and transforms it into a fully-working setup with everything deployed. That's the reproducible config. When it's time to upgrade to the next LTS release (every 4 years or so), I just provision a new machine and run that script again. It'll probably fail on first try because some ubuntu package name changed slightly, but that's a 5-minute fix.
"But what about scaling?" One of my crazy-fast dedicated machines is equal to ~10 of your slow-ass VPSes. If my product is so successful that this isn't enough, that's a good problem to have. Maybe a second dedicated machine, plus a load balancer, would be enough? If my product gets so popular that I'm thinking about hundreds of dedicated machines, then hopefully I have a team to help me with that.
This all depends on the use-case.
For example, if the service is using a massive dataset hosted on AWS such as Sentinel 2 satellite imagery, then the bandwidth and egress costs will be the driving factors.
I’ve come to believe that such comparisons usually come from people who don’t understand the trade-offs of AWS in production.
Each project has certainly its own requirements. If you have the manpower and a backup plan with blue/green for every infrastructure component, then absolutely harness that cost margin of yours. If it’s at a break even when you factor in specialist continuity - training folks so nothing’s down if your hardware breaks, then AWS wins.
If your project can tolerate downtime and your SREs can sleep at night, then you might profit less from the several-nines HA SLOs that AWS guarantees.
It’s very hard and costly to replicate what AWS gives you if you have requirements close to enterprise levels. Also, the usual argument goes - when you’re a startup you’ll be happy to trade CAPEX for OPEX.
For an average hobby project maybe not the best option.
As for latency, you can get just as good. Major exchanges run their matching engines in AWS DCs, you can co-locate.
Why is this a video? I'm not going to watch it. I will read the AI summary of the transcript though:
The video argues that AWS is dramatically overpriced and underpowered compared to cheap VPS or dedicated servers. Using Sysbench benchmarks, the creator shows that a low-cost VPS outperforms AWS EC2 and ECS by large margins (EC2 has ~20% of the VPS’s CPU performance while costing 3× more; ECS costs 6× more with only modest improvements). ECS setup is also complicated and inconsistent. Dedicated servers offer about 10× the performance of similarly priced AWS options. The conclusion: most apps don’t need cloud-scale architecture, and cloud dominance comes from marketing—not superior value or performance.
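For reference, the kind of sysbench run behind numbers like these is only a few commands (not necessarily the exact flags the video used), so it's easy to reproduce on whatever boxes you want to compare:

    # CPU benchmark: events/sec, higher is better (sysbench 1.0+ syntax)
    sysbench cpu --threads="$(nproc)" --cpu-max-prime=20000 --time=30 run

    # Memory and disk give a fuller picture than CPU alone
    sysbench memory --threads="$(nproc)" run
    sysbench fileio --file-total-size=4G prepare
    sysbench fileio --file-total-size=4G --file-test-mode=rndrw --time=30 run
    sysbench fileio --file-total-size=4G cleanup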
> Why is this a video? I'm not going to watch it.
There have also been a couple of threads in text form about the same topic. Some like text, some like video.
Didn't even mention the difference in data costs, or S3 plus transfer, because then we'll be going into 2-orders-of-magnitude differences ...
Not to mention what happens when you pay per megabyte and someone DDoSes you. Cloud brought back almost all hosting antipatterns, and means denial-of-service attacks really should be renamed denial-of-wallet attacks. And leaving a single S3 bucket, a single serverless function, a single ... available (not even open) makes you vulnerable if someone knows or figures out the URL.
Or the difference in effort predicting those costs
Pricing in AWS is heavily dependent on whether you reserve the instance and for how long.
In my experience, if you reserve a bare metal instance for 3 years (which is the biggest discount), it costs 2 times the price of buying it outright.
I'm surprised to hear about the numbers from the video being way different, but then, it's a video, so I didn't watch it and can't tell if he did use the correct pricing.
You seem to insinuate that the correct pricing assumes a 3-year commitment. That seems illogical to me, considering the original promise of the cloud was to be flexible and to scale up and down on demand.
Elasticity is very expensive, in practice people only use it for one-off jobs, preferably using the cheaper unreliable "spot" instances (meaning the job must support being partially re-run to recover, which implies a complex job splitting and batching platform).
For traditional, always-on servers, you should reserve them for 3 years. You still have the ability to scale up, just not down. You can always go hybrid if you don't know what your baseline usage is.
Should you be designing for a single server to exist for 3 years when you have such elastic compute? Why not design for living on spot instances and get prices lower than Hetzner with better performance? What about floating savings plans? There’s a ton left on the table here just to say ‘AWS bad’ for some views.
Is this due to vCPU overcommitting? An AWS vCPU might be 1/8 of a thread, while a Hetzner vCPU is a full thread.
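One rough way to check for this from inside a VM is CPU steal time, i.e. the share of time the hypervisor spent running someone else on your vCPU. It shows contention rather than the exact overcommit ratio, but it's a quick sanity check:

    # 'st' (steal) column: % of time the hypervisor ran something else on your vCPU
    vmstat 5 12

    # Or watch the %st field in top, or the raw counter in /proc/stat
    grep '^cpu ' /proc/stat   # the 8th numeric field is cumulative steal time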
Compare the worldwide latency: I released an app in the App Store and got users from Japan to Saudi Arabia to the United States. AWS basically guarantees you can reach anyone who speaks English with low latency.
This is always an unfair comparison because for any realistic comparison you need to have two servers on two locations for georedundancy and need to pay for the premises and their physical security, too. For example, you need to pay for security locks with access log and a commercial security company, or you have to pay for co-location in a datacenter.
When you add up all these costs plus the electricity bill, I wager that many cloud providers are on the cheaper side due to the economy of scale. I'd be interested in such a more detailed comparison for various locations / setups vs cloud providers.
What almost never goes into this discussion, however, is the expertise and infrastructure you lose when you put your servers into the cloud. Your own servers and their infrastructure are a moat that can be sold as various products if needed. In contrast, relying on a cloud provider is mostly an additional dependency.
A high-density cabinet in a datacenter costs $4k at most, including power and bandwidth.
That's nothing compared to an average AWS bill.
> you need to have two servers on two locations for georedundancy
You also absolutely need this with EC2 instances, which is what the comparison was about. So no, it's not unfair.
If you're using an AWS service built on top of EC2, Fargate, or anything else, you WILL see the same costs (on top of the extremely expensive Ops engineer you hire to do it, of course).
> need to pay for the premises and their physical security, too [...] plus the electricity bill
...and all of this is included in the Hetzner service.
Once again comments conflating "dedicated server" with "co-location".
AWS counts as managed servers with constant security monitoring. That's a huge difference from paying for a dedicated server where you're responsible for the installation and maintenance of the operating system and all software, intrusion detection and threat response, and server monitoring.
I am a Hetzner customer for my forthcoming small company in order to keep running costs low, but it's not as if companies using AWS were irrational. You get what you pay for.
> AWS counts as managed servers with constant security monitoring. That's a huge difference from paying for a dedicated server where you're responsible for the installation and maintenance of the operating system and all software, intrusion detection and threat response, and server monitoring.
This has absolutely nothing to do with "georedundancy" or "physical security" or "electricity".
I mean yeah they say that 1 vCPU == 1 hyper thread which is 10% of a CPU.
At least this is what they said years ago.
now look at spot instance comparisons
Or reserved capacity instances.
I hate these comparisons because it's not apples to apples.
The entire point of AWS is so you don't have to get a dedicated server.
It's infra as a service.
I don’t understand your complaint.
The point of having a private chef is so you don’t have to cook food by yourself.
It’s still extremely useful to know if the private chef is cheaper or more expensive than cooking by yourself and by how much, so you can make a decision more aware of the trade offs involved.
The problem with this discussion is that a lot of people on these threads work as overpaid assistants to the one private chef, but also have never cooked at home.
Translating:
A lot of people work with AWS, are making bank, and are terrified of their skill set being made obsolete.
They also have no idea what it means to use a dedicated server.
That’s why we get the same tired arguments and assumptions (such as the belief that bare-metal means “server room here in the office”) in every discussion.
> such as the belief that bare-metal means “server room here in the office”
I remember the day I discovered some companies, and not just tech ones (Walmart, UPS, Toyota,…) actually own, operate, and use their own datacenters.
And there are companies out there that specialize in planning and building datacenters for them.
I mean, it’s kind of obvious. But it made me realize at how small a scale I both thought and operated.
Walmart does not want to use AWS because they are in direct competition.
I worked for a company that was attempting to sell software to walmart.
Check out how Wikipedia and the rest of the wikimedia universe is run.
One of the least insightful comments I’ve seen in my 16 years here. “it’s because everyone here is dumb and knows it, and they are panicking and lying because they don’t want you to blow up their scam.”
I'm not calling anyone dumb. It is fine to not have experience with Hetzner or to have only with AWS. It's fine for someone to not know how to cook at home.
About people who work with it, I'm just alluding to the famous quote "It is difficult to get a man to understand something when his salary depends upon his not understanding it".
Having a private chef is more like having hired people to manage your own hardware. DoorDash is AWS.
it's not interesting as a standalone question indeed. The question is, what do you enable by having a private chef?
Is it the fact that you don't want to spend the time cooking? or is it cooking plus shopping plus cleaning up after?
Or is it counting the time to take cooking lessons? and including the cost of taking the bus to those cooking lessons?
Does the private chef even use your house, or their own kitchen? Or can you get a smaller house without a kitchen altogether? Especially at the rate of kitchen improvement, where kitchens don't last 20 years anymore, you're gonna need a new kitchen every 5 years. (Granted, the analogy is starting to fail here, but you get my point.)
Big companies have been terrible at managing costs and attributing value. At least with cloud the costs are somewhat clear. Also, finding skilled staff is a considerable expense for businesses with more than a few pieces of code, and it takes time; you can't just get them on a whim and get rid of them.
My feeling is that a more apt comparison would be "cooking yourself vs. ordering DoorDash all the time" (aka "a taxi for your burrito").
Sure, that might be a more appropriate comparison.
The key point is that being aware of the cost trade off is useful.
Yes
Getting a chef would be hiring your own devops team
But the point of AWS is that you can buy these services with very fine granularity
>The point of having a private chef is so you don’t have to cook food by yourself.
With cloud, you hire a private chef and ALSO have to cook the food by yourself.
You don't hire a team to maintain the server infrastructure, but you hire a team to maintain cloud infrastructure.
> The entire point of AWS is so you don't have to get a dedicated server.
Yet every company I've worked for still used at least a bunch of AWS VPSes exactly as they would have used dedicated servers, just for ten times the cost.
Then why does EC2 exist?