I’ve set up a Minikube cluster that runs RabbitMQ and KEDA. The aim is to scale containers based on the number of RabbitMQ messages in a single queue. The scaling mechanism works fine: whenever I send a message to the queue, a container spins up. The problem is that this container is also a consumer of the same queue ..
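For context, a queue-based KEDA setup like the one described is typically wired with a `ScaledObject`. A minimal sketch, where the Deployment, queue, and TriggerAuthentication names are placeholders, not taken from the question:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-consumer-scaler     # placeholder name
spec:
  scaleTargetRef:
    name: consumer-deployment        # placeholder: the consumer Deployment
  minReplicaCount: 0
  maxReplicaCount: 5
  triggers:
    - type: rabbitmq
      metadata:
        queueName: task-queue        # placeholder queue name
        mode: QueueLength            # scale on the number of ready messages
        value: "1"
      authenticationRef:
        name: rabbitmq-trigger-auth  # TriggerAuthentication holding the AMQP host URL
```

Note that because the scaled pods consume from the same queue the scaler watches, the queue length drops as they work, which feeds back into the replica count.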
I am trying to deploy Next.js on Kubernetes, but I also want all the static files to be served from S3. Is there any way to do that? For example, currently all the static files are served from the local file system of the server: <link rel="preload" href="/_next/static/chunks/main.js?ts=1616564196729" as="script"> <link rel="preload" href="/_next/static/chunks/webpack.js?ts=1616564196729" ..
My Node.js application runs in Kubernetes and has several instances. It works with WebSockets and broadcasts events by means of Redis so that the instances can communicate with each other. The application also sends a push notification via Firebase Cloud Messaging if the client is not currently connected via a WebSocket. When I have ..
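The fan-out decision each instance makes in a setup like this can be sketched as a pure function. Everything here is a hypothetical shape, not the app's actual API: `routeEvent`, the event's `clientId` field, and the `isPushOwner` flag (exactly one instance should own the FCM fallback, or the client gets a duplicate push per instance):

```javascript
// Hypothetical sketch: decide how one instance delivers an event that
// arrived over the shared Redis channel.
function routeEvent(event, localSockets, isPushOwner) {
  const socket = localSockets.get(event.clientId);
  if (socket) return { via: 'websocket', socket }; // client is connected here
  if (isPushOwner) return { via: 'fcm' };          // offline: one instance pushes
  return { via: 'none' };                          // another instance will handle it
}
```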
I’m trying to send RabbitMQ messages from my host machine to a Minikube instance with a RabbitMQ cluster deployed. When running my send script, I get hit with this error: Handshake terminated by server: 403 (ACCESS-REFUSED) with message "ACCESS_REFUSED – Login was refused using authentication mechanism PLAIN. For details see the broker logfile. In the ..
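One common cause of `ACCESS_REFUSED` in this situation: RabbitMQ's default `guest:guest` account is only allowed to connect from localhost, so a script on the host machine reaching into Minikube needs an explicitly created user, and the credentials and vhost must be percent-encoded in the AMQP URL. A sketch of building such a URL (all values are placeholders):

```javascript
// Build an AMQP connection URL, percent-encoding the credentials and vhost.
// A wrong user/password pair, or a user without permissions on the vhost,
// is what produces the 403 ACCESS_REFUSED handshake error.
function amqpUrl({ user, pass, host, port = 5672, vhost = '/' }) {
  return `amqp://${encodeURIComponent(user)}:${encodeURIComponent(pass)}` +
         `@${host}:${port}/${encodeURIComponent(vhost)}`;
}
```

For example, `amqpUrl({ user: 'guest', pass: 'guest', host: '192.168.49.2', port: 30672 })` yields `'amqp://guest:guest@192.168.49.2:30672/%2F'`, with the default vhost `/` encoded as `%2F`.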
Current behavior: after upgrading from Babel 6 to 7, an error occurred in the PR deploy phase. It was working normally in a local build. This is what I see in the pm2 log on K8s: Cannot find module '@babel/runtime-corejs2/core-js/reflect/construct' at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15) at Module.Hook._require.Module.require (/home/y/lib/node_modules/pm2/node_modules/require-in-the-middle/index.js:61:29) at require (internal/modules/cjs/helpers.js:25:18) at Object.<anonymous> (/home/y/share/node/manhattan_app/transpile/lib/errorHelper.js:3:26) at Module._compile (internal/modules/cjs/loader.js:778:30) at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10) at ..
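A frequent cause of this particular "Cannot find module" error is that `@babel/runtime-corejs2` is a runtime dependency of the transpiled output: when `@babel/plugin-transform-runtime` is configured with `corejs: 2`, the package must be listed under `dependencies` (not `devDependencies`) so it is present in the deployed `node_modules`. A minimal sketch of the matching Babel config, assuming `@babel/preset-env` is in use:

```json
{
  "presets": ["@babel/preset-env"],
  "plugins": [
    ["@babel/plugin-transform-runtime", { "corejs": 2 }]
  ]
}
```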
I have a WebSocket server and client implemented in Node.js. I will create multiple instances of the server using Kubernetes; say I now have 10 instances of the server node and the clients connect to them, and I will use Nginx to distribute the clients' requests across the server instances. Say I have ..
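For reference, proxying WebSockets through Nginx requires forwarding the HTTP upgrade headers explicitly. A minimal sketch, where the upstream names and ports are placeholders:

```nginx
# Minimal sketch of Nginx load-balancing WebSocket connections.
upstream ws_backend {
    server ws-server-1:8080;   # placeholder instance addresses
    server ws-server-2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://ws_backend;
        proxy_http_version 1.1;                    # WS needs HTTP/1.1
        proxy_set_header Upgrade $http_upgrade;    # forward the upgrade handshake
        proxy_set_header Connection "upgrade";
    }
}
```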
My problem is that I need to increase the NATS max_payload in a production environment, but it runs on Kubernetes and I have no idea how to do that; I tried to use a ConfigMap but had no success. In the local/dev environment it uses a NATS config file with Docker, so it works fine. Way to make it ..
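The Docker approach carries over to Kubernetes: put the same NATS config file into a ConfigMap, mount it into the server pod, and start the server with `-c` pointing at the mounted path. A minimal sketch, with placeholder names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nats-config              # placeholder name
data:
  nats.conf: |
    max_payload: 4Mb             # raise the 1 MB default
---
# In the NATS Deployment/StatefulSet, mount the ConfigMap as a volume
# (e.g. at /etc/nats) and launch the server with:
#   nats-server -c /etc/nats/nats.conf
```

Existing pods only pick up the new limit after a restart, since the server reads the file at startup.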
Having built a static frontend, my thought was to deploy it to a static-asset host like S3, while deploying my microservices backend with Kubernetes to a compute service like EC2 or EKS. However, my research has only turned up one approach: building the frontend as a service in my Kubernetes cluster. My questions: Is ..
So I have a project which builds 3 Kubernetes pods that connect to each other, namely server, manager and browser; the browser part is implemented using Vue.js, the others using Python Flask. I have a ConfigMap storing the addresses of their services, which is supposed to be shared by the three: apiVersion: v1 kind: ..
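A shared ConfigMap of this kind is usually consumed via `envFrom` in each Deployment, so every key becomes an environment variable. A minimal sketch, where all names and addresses are placeholders rather than the project's actual values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-addresses        # placeholder name
data:
  SERVER_ADDR: http://server:5000     # placeholder Service DNS names
  MANAGER_ADDR: http://manager:5001
---
# In each of the three Deployments' container specs:
#   envFrom:
#     - configMapRef:
#         name: service-addresses
```

One caveat for the Vue.js pod: code running in the user's browser cannot read cluster environment variables at runtime, so values a client-side bundle needs must be injected at build time or served to it by the backend.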