Question:
I assumed when I read the pitch that I could spin up a managed K8s somewhere, like in Digital Ocean, and use this somehow. But after reading docs and comments, it sounds like this needs to manage my K8s for me?
I guess my question is:
1) When I spin up a "Cluster" on Hetzner, is that just dividing up a single machine, or is it a true K8s cluster that spans across multiple machines?
2) If I run this install script on another server, does it join the cluster, giving me true distributed servers to host the pods?
3) Is there a way to take an existing managed K8s and have Canine deploy to it?
czhu12 10 hours ago [-]
Yeah so at the moment it kind of supports two options:
1. A single Hetzner VPS
2. An existing Kubernetes cluster.
I usually use #1 for staging / development apps, and then #2 for production apps. For #2, I manage the number of nodes on the Digital Ocean side, and kubernetes just magically reschedules my workload accordingly (also can turn on auto scaling).
I think the thing that you're getting at that is not supported is having Canine create a multi-node cluster directly within Hetzner.
There is a Terraform module to create a Kubernetes cluster on Hetzner (https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne...), but this isn't currently built into Canine, and it doesn't support Helm charts.
I'm not opposed to trying it out; there were a few UI improvements I wanted to take a shot at first. But at the moment, Canine assumes you have a cluster ready to go, or can help walk you through a K3s installation on a single VPS.
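For reference, managing the node count on the Digital Ocean side is a doctl one-liner (cluster and pool names here are placeholders):

    # resize the pool manually
    doctl kubernetes cluster node-pool update my-cluster my-pool --count 4

    # or let DigitalOcean add/remove nodes as the workload demands
    doctl kubernetes cluster node-pool update my-cluster my-pool \
      --auto-scale --min-nodes 2 --max-nodes 6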
Oh! This is good news!
I was not asking about K8s on Hetzner per se. I was asking if I could spin up a managed cluster (on Digital Ocean, etc) and use this on it. It sounds like I can, which is great! I think I missed that in the docs.
I’ve recently started an open-source self-hosted data platform (https://github.com/kot-behemoth/kitsunadata), with Dokku being a great initial deployment mode. It’s mature, simple to get started with, and has tons of docs / tutorials.
I collected a bunch of links while learning it, and launched https://github.com/kot-behemoth/awesome-dokku, as there wasn’t an “awesome” list.
Hope it helps someone!
Cloud computing architecture > Delivery links to SaaS, DaaS, PaaS, IaaS: https://en.wikipedia.org/wiki/Cloud_computing_architecture
Cloud-computing comparison: https://en.wikipedia.org/wiki/Cloud-computing_comparison
Category:Cloud_platforms: https://en.wikipedia.org/wiki/Category:Cloud_platforms
awesome-selfhosted has a serverless / FaaS category that just links to awesome-sysadmin > PaaS: https://github.com/awesome-selfhosted/awesome-selfhosted#sof...
This is a more featureful version.
First - I really want something like this to exist and be great, so best of luck. As of today I'd consider this or Dokploy (Docker Swarm is underrated).
Small feedback - your "Why you should NOT use Canine" section is actually a net-negative for me. I thought it was cool that it might genuinely list downsides, but then you did a sarcastic thing that was annoying. I think you should just be frank: you'll have to purchase and manage servers, you'll be on the hook if they go down and have to get them back up, this is an early product made by one person, etc.
czhu12 10 hours ago [-]
Haha, well there goes my attempt to be different from the other landing pages out there. I'll take another stab, but appreciate the feedback!
1oooqooq 9 hours ago [-]
please keep it. it is awesome (and it has to be said)! (but add the critical points too)
chrisweekly 11 hours ago [-]
100% agreed on both points.
czhu12 13 hours ago [-]
Would also add -- this has been by far the funnest project I've ever built. Owning the "tech stack" from top to bottom is a super satisfying feeling.
Rails app
Canine infra
Raspberry pi server
My own ISP
Was a tech stack I managed to get an app running on, for some projects I've kicked around.
vanillax 11 hours ago [-]
Nitpick: Kubernetes doesn't run Docker containers. It runs containers that conform to the Open Container Initiative (OCI) spec. Docker is a licensed brand name.
I know this is just a general description, but "10,000 servers" -> Kubernetes actually only claims support for up to 5,000 nodes: https://kubernetes.io/docs/setup/best-practices/cluster-larg...
Plenty of larger clusters exist, but this usually requires extensive tuning (such as entirely replacing the API registry). And obviously the specific workload plays a large role. Kubernetes is actually quite far from supporting larger clusters out of the box, though most releases include some work in that direction.
czhu12 10 hours ago [-]
Ah yeah I could be wrong. In my early days at Airbnb, I recall someone doing an internal test to prove it could scale to 10k servers, but this was back in 2016 and I wasn't the one doing the test.
I'll walk that back
cchance 7 hours ago [-]
Yep, I hate when I see Docker required. I don't run anything with Docker anymore, just Podman and containerd for the most part.
varun_chopra 23 minutes ago [-]
I think you do need to support being able to add more nodes to the Hetzner install. Then it'd be perfect.
conqrr 12 hours ago [-]
Very cool. I've looked into doing something similar for self hosting and have wanted something in between Docker and Kubernetes. Nomad seemed like a good fit, but still a tad more work than dead-simple Docker, and it lacks the ecosystem. I finally gave in to just using Docker and living with deployment downtime on upgrades, which is fine for a personal home server.
But for production services, I wonder how much of K8s Canine really abstracts. Do I ever need to peek underneath the hood? I'm no k8s expert, but I wonder if there is simply no happy medium between these two.
psviderski 9 hours ago [-]
I'm actually building something in between Docker and Kubernetes: https://github.com/psviderski/uncloud. Like you, I wanted that middle ground without the operational overhead. It's basically a Docker-like CLI and Docker Compose with multi-machine and production capabilities, but no control plane to maintain.
Still in active development but the goal is to keep it simple enough that you can easily understand what's happening at each layer and can troubleshoot.
conqrr 9 hours ago [-]
Looks promising and exactly what I want solved. Adding wireguard and Caddy is slick. How are you planning to go about Zero Downtime deploy? Maybe emulate Swarm?
psviderski 9 hours ago [-]
Thanks! For zero-downtime deploys, it does simple rolling updates one container at a time in a similar way k8s or swarm does it. It starts the new container alongside the old one, waits for it to become healthy, Caddy picks it up and updates its config, then removes the old one.
The difference is that this process is driven by your CLI command (not a reconciliation loop in the cluster), so you get instant feedback if something goes wrong.
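A rough sketch of that sequence with plain Docker, to make it concrete (container names and health endpoint are hypothetical, and this is the generic pattern rather than Uncloud's actual code):

    # start the replacement alongside the old container
    docker run -d --name web-v2 \
      --health-cmd 'curl -fsS http://localhost:8080/health || exit 1' \
      myapp:v2

    # wait until Docker reports it healthy before shifting traffic
    until [ "$(docker inspect -f '{{.State.Health.Status}}' web-v2)" = "healthy" ]; do
      sleep 1
    done

    # the proxy (Caddy) now routes to web-v2; retire the old container
    docker rm -f web-v1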
stego-tech 12 hours ago [-]
I dig the concept! K8s is an amazing technology hampered by overwhelming complexity (flashback vibes to the early days of x86 virtualization), and thumbing through your literature it seems you’ve got a good grasp of the fundamentals everyone needs in order to leverage K8s in more scenarios - especially areas where PVE, Microcloud, or Cockpit might end up being more popular within (namely self-hosting).
I’ve got a spare N100 NUC at home that’s languishing with an unfinished Microcloud install; thinking of yanking that off and giving Canine a try instead!
czhu12 11 hours ago [-]
The part I found to be a little unwieldy at times was helm. It becomes a little unpredictable when you apply updates to the values.yaml file: which ones will apply, and which ones need to be set at install time. Also, some helm installations deploy a massive number of services, and it's confusing which ones are safe to restart when.
But, I've always found core kubernetes to be a delight to work with, especially for stateless jobs.
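A concrete example of that values.yaml gotcha (chart and key names are illustrative):

    # edit values.yaml, then:
    helm upgrade my-release bitnami/postgresql -f values.yaml
    # the upgrade "succeeds", but install-time-only values (e.g.
    # auth.postgresPassword, or a PVC size, which is immutable)
    # may silently not take effect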
cyberpunk 11 hours ago [-]
i really don’t know where this complexity thing comes from anymore. maybe back in the day when a k8s cluster was a 2 hour kubespray run or something, but it’s now a single yaml file and an ssh key if you use something like rke.
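To be concrete, the RKE flow really is about this small (address, user, and key path are placeholders):

    # cluster.yml
    nodes:
      - address: 203.0.113.10
        user: ubuntu
        ssh_key_path: ~/.ssh/id_rsa
        role: [controlplane, worker, etcd]

    # then bring the cluster up with one command:
    #   rke up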
hombre_fatal 9 hours ago [-]
You are so used to the idiosyncrasies of k8s that you are probably blind to them. And you are probably so experienced with the k8s stack that you can easily debug issues so you discount them.
Not long ago, I was using Google Kubernetes Engine when DNS started failing inside the k8s cluster on a routine deploy that didn't touch the k8s config.
I hacked on it for quite some time before I gave up and decided to start a whole new cluster. At which point I decided to migrate to Linode if I was going to go through the trouble. It was pretty sobering.
Kubernetes has many moving parts that move inside your part of the stack. That's one of the things that makes it complex compared to things like Heroku or Google Cloud Run where the moving parts run in the provider's side of the stack.
It's also complex because it does a lot compared to pushing a container somewhere. You might be used to it, but that doesn't mean it's not complex.
esseph 9 hours ago [-]
Running large deployments on bare metal and managing the software and firmware lifecycle still has significant complexity. Modern tooling makes things much better - but it's not "easy".
The kubernetes iceberg is 3+ years old but still fairly accurate: https://www.reddit.com/r/kubernetes/comments/u9b95u/kubernet...
I was able to create a new service and deploy it with a couple of simple, ~8-line ymls and the cluster takes care of setting up DNS on a subdomain of my main domain, wiring up Lets Encrypt, and deploying the container. Deploying the latest version of my built container image was one kubectl command. I loved it.
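That setup is presumably something along these lines, assuming an ingress controller, cert-manager, and external-dns are already running in the cluster (hostname and image are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt   # TLS via Let's Encrypt
    spec:
      rules:
        - host: myapp.example.com                     # external-dns publishes this record
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80
      tls:
        - hosts: [myapp.example.com]
          secretName: myapp-tls

    # and the one-command deploy of a new image:
    #   kubectl set image deployment/myapp web=ghcr.io/me/myapp:latest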
vanillax 11 hours ago [-]
I was gonna echo this. K8s is rather easy to set up. Certificates, domains, CI/CD (Flux/Argo) is where some complexity comes in. If anyone wants to learn more, I do have a video with what I think is the most straightforward yet production-capable setup for hosting at home.
nabeards 7 hours ago [-]
Looks like your video is k3s. Just a heads up to others hoping for a k8s bare metal setup.
i assume when people are talking about k8s complexity, it’s either more complicated scenarios, or they’re not talking about managed k8s.
even then though, it’s more that complex needs are complex and not so much that k8s is the thing driving the complexity.
if your primary complexity is k8s you either are doing it wrong or chose the wrong tool.
stego-tech 11 hours ago [-]
> or they’re not talking about managed k8s
Bingo! Managed K8s on a hyperscaler is easy mode, and a godsend. I’m speaking from the cluster admin and bare metal perspectives, where it’s a frustrating exercise in micromanaging all these additional abstraction layers just to get the basic “managed” K8s functions in a reliable state.
If you’re using managed K8s, then don’t @ me about “It’S nOt CoMpLeX” because we’re not even in the same book, let alone the same chapter. Hypervisors can deploy to bare metal and shared storage without much in the way of additional configuration, but K8s requires defining PVs, storage classes, network layers, local DNS, local firewalls and routers, etc, most of which it does not want to play nicely with pre-1.20 out of the box. It’s gotten better these past two years for sure, but it’s still not as plug-and-play as something like ESXi+vSphere/RHEL+Cockpit/PVE, and that’s a damn shame.
Hence why I’m always eager to drive something like Canine!
(EDIT: and unless you absolutely have a reason to do bare metal self-hosted K8s from binaries you should absolutely be on a managed K8s cluster provider of some sort. Seriously, the headaches aren’t worth the cost savings for any org of size)
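A taste of what "defining PVs and storage classes" means on bare metal, where nothing is provisioned for you (names and paths are illustrative):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-storage
    provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: data-node1
    spec:
      capacity:
        storage: 50Gi
      accessModes: [ReadWriteOnce]
      storageClassName: local-storage
      local:
        path: /mnt/disks/ssd1
      nodeAffinity:          # local volumes must be pinned to a node by hand
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values: [node1]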
esseph 9 hours ago [-]
I agree with all of this except for your bottom edit.
Nutanix and others are helping a lot in this area. Also really like Talos and hope they keep growing.
stego-tech 8 hours ago [-]
That’s fair! Nutanix impressed me as well when I was doing a hypervisor deep dive in 2022/2023, but I had concerns about their (lack of) profitability in the long run. VMware Tanzu wasn’t bad either, but was more of an arm-pull than AWS was for K8s. Talos is on my “to review” list, especially with their community license that lets you manage small deployments like a proper Enterprise might (great evangelism idea, there), but moving everything to kube-virt was a nonstarter in the org at the time.
K8s’ ecosystem is improving by the day, but I’m still leaning towards a managed K8s cluster from a cloud provider for most production workloads, as it really is just a few lines of YAML to bootstrap new clusters with automated backups and secrets management nowadays - if you don’t mind the eye-watering bill that comes every month for said convenience.
esseph 8 hours ago [-]
If you work in any of the 16 CISA-identified critical infrastructure sectors, one of their recommendations is that organizations be able to operate for more than 24h without an Internet connection.
Kinda hard to control real-world things that rely on an Internet connection when there isn't one.
Note: Nutanix made some interesting k8s-related acquisitions in the last few years. If interested, you should take a look at some of the things they are working on.
stego-tech 5 hours ago [-]
If I were still in that role, I’d absolutely be keeping my Nutanix rep warm for a possible migration. Alas, I’m in another org building them a Win11 imaging pipeline for the time being, and Nutanix doesn’t want to play nice with my personal N100 NUCs for me to try their Community Edition.
nabeards 7 hours ago [-]
Exactly the same as you said. Nobody rents GPUs as cheap as I can get them for LLM work in-cluster.
matus_congrady 9 hours ago [-]
At https://stacktape.com, we're also in the same space. We're offering Heroku-like experience on top of your own AWS account.
I like what you're doing. But, to be honest, it's a tough market. While the promise of $265 vs $4 might seem like a no-brainer, you're comparing apples to oranges.
- Your DX will most likely be far from Heroku's. Their developer experience has been refined by 100,000s of developers. It's hard to think through everything, and you're very unlikely to get anywhere close once you go beyond simple use-cases.
- A "single VM" setup is not really production-grade. You're lacking reliability, scalability, redundancy and many more features that these platforms have. It definitely works for low-traffic side-projects. But people or entities that actually have a budget for something like this, and are willing to pay, are usually looking for a different solution.
That being said, I wish you all the luck. Maybe things change in the AI-generated apps era.
czhu12 8 hours ago [-]
Yeah I agree with you, but I think that's why Kubernetes is maybe a good place to work from. It already has a massive API with a pretty large ecosystem, so at the base level, the `kubectl` developer experience is about as good as any could be. K8s also makes it reasonably easy to scale to massive clusters, with good resilience, without too much of a hiccup.
hardwaresofton 7 hours ago [-]
Hey, if you’re going to offer constructive feedback to a competitor, maybe don’t lead with a plug.
dabbz 4 hours ago [-]
How do the Addons work? Are they just pre-built Dockerfiles? Or is there more to them? I'd love to request MariaDB if there's something special beyond just a Dockerfile / Docker image.
JeffMcCune 8 hours ago [-]
Is Google sheets backend (from the screenshot in the readme) what I think it is? Sheets API as a database?
If so props to you.
My original idea behind https://holos.run was to create a Heroku-like experience for k8s, so I’m super happy to see this existing in the world. I’d love to explore an integration, potentially spinning up the single or multi node clusters with cluster api.
I wondered how easy it would be to vibe code something like this. Turns out, super easy.
The problem is that kubero, idk, they did not gain any traction.
Maybe most users want simple tools like coolify.
Energy on this project: https://github.com/czhu12/canine/graphs/code-frequency
The video example is a little confusing from a "I just want to self host the whole thing on a single machine/VM etc" perspective. If that's what I want to do, do I still have to create a cluster by putting in some managed DO K8s reference? Eg I just want it to use the local VM for the cluster. Do you have some other videos or can you make some that show say how to install it all-in-one on a single machine, and then from there how to add/deploy an app?
czhu12 10 hours ago [-]
Yeah, so the setup I have there is basically: for staging / non-prod apps, I typically just boot a single Hetzner VPS and install a K8s-compliant server, then I'm off to the races.
For production, I usually use Digital Ocean, so I get a managed Kubernetes, but also a managed Postgres within the same data center for latency needs. Lets me sleep easier at night :)
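For anyone following along, the single-VPS route is K3s's documented installer, then pointing external tooling (like Canine or kubectl) at the resulting kubeconfig:

    # on the VPS: installs and starts a single-node cluster
    curl -sfL https://get.k3s.io | sh -

    # kubeconfig to hand to external tooling
    sudo cat /etc/rancher/k3s/k3s.yaml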
indigodaddy 10 hours ago [-]
Gotcha, I just think a big part of your audience will want to one-box it for personal hobbyist apps and projects. Do you have documentation for your staging approach?
federicotdn 9 hours ago [-]
I can't get Project creation to work. I click on "Deploy from Docker Hub instead →", fill in the details (name, image, cluster), and when I click Submit, I'm taken back to the Projects page (empty).
Edit: looks like POST https://canine.sh/projects is returning 422.
Weird, I'll take a look. Guessing there's a permission issue with reaching Docker Hub. Is it a private repository? I think last time I tried deploying an image from Docker Hub, the token that was provisioned didn't have read access.
At the very least, there's a bug in showing a better error message, so I'll fix that now!
federicotdn 8 minutes ago [-]
Indeed, private repo
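For reference, the standard Kubernetes-side fix for pulling from a private registry is an image pull secret (credentials here are placeholders):

    kubectl create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=myuser \
      --docker-password=my-access-token

    # then in the pod spec:
    #   imagePullSecrets:
    #     - name: regcred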
lotyrin 7 hours ago [-]
Any support (or timeline for support) for what Heroku calls "review apps" (PRs automatically become ephemeral staging environments)? Skimming the docs and repo I didn't see any.
czhu12 2 hours ago [-]
Not yet, was on the roadmap, as well as Gitlab support. I was going to trickle out the updates over the next few weeks
rcarmo 12 hours ago [-]
I’m curious as to how storage and secrets are handled, since my recurring issue with Kubernetes is not deploying the containers or monitoring them but having a sane way to ensure a (re)deployed app or stack would use the same storage and/or multiple apps would put their data in consistent locations.
Also, having seen the demo video, it's a happy-path thing (public repo, has Dockerfiles, etc.). What about private code and images?
film42 11 hours ago [-]
Looks great! I'm definitely in the market for something like this; and building on top of helm charts makes me want to try it out.
Can Canine automatically upgrade my helm charts? That would be killer. I usually stay on cloud-hosted paid plans because remembering to upgrade is not fun. The next reason is that I often need to recall the ops knowledge just after I've forgotten it.
czhu12 11 hours ago [-]
It can apply upgrades, but I don't think it solves your core problem, which is how to perform upgrades safely. Most of the time it's totally fine, but sometimes a config key changes across versions.
Upgrading helm charts without some manual monitoring seems like it might still be an unsolved problem :(
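One partial mitigation, using the third-party helm-diff plugin, is previewing exactly what an upgrade would change before applying it (release and chart names are illustrative):

    helm plugin install https://github.com/databus23/helm-diff
    # shows added/removed/changed keys and affected resources without applying
    helm diff upgrade my-release bitnami/redis -f values.yaml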
Everhusk 12 hours ago [-]
I've also always wondered why computing costs keep going down but cloud costs keep going up.
You deserve an award for building this, thank you.
czhu12 11 hours ago [-]
Yeah in our last year at my previous company, we spent over ~$400k on what amounted to about ~512GB of memory across our fleet of 8 instances on a PaaS vendor. We were almost always memory bound, so I don't even know the compute side.
Given that the hardware itself now costs less than 8k to purchase outright, just seemed ridiculous. Albeit we did have SOC2, enterprise plan, etc, but it was a painful bill every quarter.
does it track and persist application state across redeploys, or is every deploy treated as a stateless chart apply? interested in how it handles config drift, secrets rotation and volume reuse without breaking existing pods
serbuvlad 12 hours ago [-]
On the webpage at the "Why you should NOT use Canine" section, it is possible to swipe away a card that is in the background, which is very weird UX.
Chrome 137. Android 13.
Other than that... I'll give it a shot. Have three N100 NUCs. Two are currently unused after failed attempts to learn to use k8s.
Maybe this'll do the trick.
wkat4242 5 hours ago [-]
Yeah the background card swiping is confusing. I understand why it happens but it should not be possible when only a tiny sliver of the card is shown.
asadawadia 5 hours ago [-]
How do you handle abuse / malicious user code being run on your server?
vanillax 11 hours ago [-]
Can you make it so I can use my existing k8 cluster? or does it have to always use your embedded k3s? It seems like if you are just using k3s it should be easy to bring your own...
czhu12 2 hours ago [-]
Yep, it's intended to be used with k8s clusters; k3s is more of an exceptional use case, but I wanted to be able to support super cheap, single-instance machines.
I’ll try to make that more clear in the homepage / readme
the__alchemist 9 hours ago [-]
Is this simple to deploy to and use like Heroku, or do I need to use Docker and Kubernetes? I would love a cheaper Heroku!
bravesoul2 10 hours ago [-]
Brilliant if an entire industry becomes a commodity complement because some angry dev got fed up with paying $400. Love it!
reassess_blind 10 hours ago [-]
Is there an option to just drag drop project files, instead of using GitHub? That’s how I prefer to test these out.
lloydjones 11 hours ago [-]
I started using Coolify a few months ago — Do you have a “pitch” as to how Canine is different or better?
Good work either way!
czhu12 2 hours ago [-]
Yeah coolify is great, and they’re light years ahead on features.
Main difference is that coolify supports mostly single node deployments (I could be wrong about this). A single beefy vps could handle a massive amount of load, but for a lot of orgs, it doesn’t quite meet needs for resiliency, workloads, etc.
Canine is built on Kubernetes, so it can take advantage of the entire ecosystem, helm charts, third party add ons, etc.
A good example: I was able to deploy Dagster + clickhouse, hosted behind a VPN with telepresence, all with canine, with a few clicks.
I think for personal projects, coolify is more than enough though, and probably the better option at the moment. Most of my next year will just be adding features coolify already supports.
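As an illustration of the ecosystem point: Dagster publishes an official chart, so the install Canine drives is roughly the following (chart repo URL as published in Dagster's docs; namespace is illustrative):

    helm repo add dagster https://dagster-io.github.io/helm
    helm repo update
    helm install dagster dagster/dagster --namespace dagster --create-namespace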
bitpush 11 hours ago [-]
Isn't Coolify a Vercel alternative? In that you don't see any k8s primitives (services, load balancers, etc).
In my head, I put Coolify and Vercel higher in the stack, fly.io, Heroku, and Canine closer to the metal, and on the metal itself is k8s and friends.
lloydjones 11 hours ago [-]
That’s an interesting way of looking at it.
Though I do believe that Coolify will “eventually” support Kubernetes, I think what you’re saying feels right to me.
hardwaresofton 7 hours ago [-]
Congrats on the launch Chris! Product has come a long way
eviluncle 11 hours ago [-]
So nice to see modern open source being built with rails. it's the ultimate productivity framework
bpiroman 7 hours ago [-]
Just use Kamal and deploy to a Hetzner instance
konexis007 7 hours ago [-]
How does this compare with caprover or dokku?
mkesper 13 hours ago [-]
The README is confusing me, could you mention how Kubernetes is created from your docker compose setup?
czhu12 12 hours ago [-]
Yeah, documentation is one area I've been meaning to spend more time on. The application runs outside of Kubernetes; it effectively manages a Kubernetes cluster and exposes a "Heroku"-like interface.
This way, it doesn't result in having to host a separate app within your cluster, saving resources. I was kind of imagining this to be deploying to fairly small, indie hacker type setups, so making sure that it was resource efficient was relatively important.
The docker compose set up is just for local development. I'll make that more clear, thanks for the feedback!
the__alchemist 7 hours ago [-]
Does this require docker and kubernetes?
jambay 11 hours ago [-]
The interaction design looks fairly intuitive. Good luck!
utf_8x 12 hours ago [-]
Any particular reason why this runs outside the cluster?
czhu12 11 hours ago [-]
Main reason is that I wanted to be able to support single VPS K3s clusters all the way down to ~512MB.
K3s already takes up about 100MB, which only leaves 400MB for the application, so trying to run Canine on that machine would probably create too much bloat.
dboreham 11 hours ago [-]
Don't know anything about this project, but usually the reason is "because there could be N clusters", and sometimes "because I want the cluster to come and go frequently" (e.g. for testing workloads).
yarone 10 hours ago [-]
This is awesome, great work!
znpy 10 hours ago [-]
I used heroku many years ago and i have fond memories of it.
I think the landing page fails at answering the two most basic questions:
1. Can i deploy via a stupid simple “git push” ?
2. Can i express what my workloads are via a stupid simple Procfile?
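For context, the Heroku conventions being asked about (a minimal Rails-flavored example):

    # Procfile
    web: bundle exec puma -C config/puma.rb
    worker: bundle exec sidekiq

    # deploy:
    #   git push heroku main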
tiffanyh 11 hours ago [-]
Isn't Heroku built on Erlang/OTP? Because it essentially has Kubernetes-like functionality out of the box.
Also, your docs on how K8s works look really good, and might be the most approachable docs I've seen on the subject. https://canine.gitbook.io/canine.sh/technical-details/kubern...