How to Deploy your first Kubeless function?

The following steps guide you in deploying your first Kubeless function. Before diving in, let's understand Kubeless functions and triggers:

Kubeless function

Kubeless’s primary building block is a function. Kubeless allows functions to be written in a variety of languages, including Go, Python, Ruby, and Java. A function always receives two arguments when it is invoked, whether via an HTTP call, a cron trigger, or another event source: event and context. You can think of the event as the input to the function, while context is the object that holds the function's metadata.

Triggers

Triggers are the pieces of configuration that automatically invoke a function in response to events such as an HTTP call or a life-cycle event, or on a schedule. The triggers currently available in Kubeless are:

  1. HTTP Trigger
  2. CronJob Trigger
  3. Kafka Trigger
  4. NATS Trigger
  • We’re now ready to create a function. We’ll keep things simple by writing a function that says hello and echoes back the data it receives.
  • Open your favorite IDE, create a file named hello.py, and paste in the code below:
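A minimal hello.py along these lines (a sketch based on the canonical Kubeless echo example; the greeting prefix is our own choice) could look like:

```python
def hello(event, context):
    # 'event' carries the invocation payload; its 'data' key holds the
    # request body. 'context' holds function metadata and is unused here.
    print(event)
    return "Hello! " + str(event['data'])
```
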

Regardless of the language or event source, all functions in Kubeless share the same structure. Generally speaking, each function:

  1. Receives an event object as its first argument. This object contains all of the event source's information; in particular, the key ‘data’ holds the body of the function request.
  2. Receives a second object, context, containing general information about the function.
  3. Returns a string or object that is used to reply to the caller.

Create the function with the kubeless CLI:

kubeless function deploy hello --runtime python3.4 --from-file hello.py --handler hello.hello

Let’s take a closer look at the command:

  1. hello: The name of the function we want to deploy.
  2. --runtime python3.4: The runtime used to run our function. Run kubeless get-server-config to see all the available options.
  3. --from-file hello.py: The file containing the function code. This can be a single file or a zip file of up to 1 MB in size.
  4. --handler hello.hello: The file (minus its extension) and the exposed function, in the form <file>.<function>, that will be used when receiving requests.
  • That’s it, your first function is now deployed. You can list the deployed functions with:
kubeless function ls

  • Once the function is ready, you can call it by running:
kubeless function call hello --data 'Hey'
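Before calling through kubeless, you can also exercise the handler locally by simulating the event and context objects the runtime passes in. The field names below are an assumption based on the Kubeless event schema; the real runtime populates them for you:

```python
# Approximate simulation of the event/context objects the Kubeless Python
# runtime passes to a handler, for local testing only (field names assumed).
event = {
    'data': 'Hey',                          # request body
    'event-id': 'local-test-1',             # unique event identifier
    'event-type': 'application/json',       # content type of the payload
    'event-time': '2024-01-01T00:00:00Z',   # timestamp of the event
    'event-namespace': 'cli.kubeless.io',   # event source namespace
}
context = {
    'function-name': 'hello',
    'runtime': 'python3.4',
    'timeout': '180',
    'memory-limit': '128Mi',
}

def hello(event, context):
    # Same minimal echo handler as in hello.py
    return "Hello! " + str(event['data'])

print(hello(event, context))
```
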

  • Your function is now up and running. What next? Let’s invoke it through an HTTP trigger.
  • For your function to be publicly accessible, you need an Ingress controller.
  • Any Ingress controller will work; for this article, we’ll use the Nginx Ingress controller.
  • Install the Ingress controller with Helm:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
kubectl get pods -l app.kubernetes.io/name=ingress-nginx
  • You should now have an Ingress controller running in your Kubernetes cluster.
  • Let us now create an HTTP trigger using the kubeless CLI. The command below creates an HTTP trigger named hello-http-trigger at the path env.
  • This means that we will be able to invoke the function by sending an HTTP request to the endpoint http://<ingress-ip>/env.
# Create an HTTP trigger
kubeless trigger http create hello-http-trigger --function-name hello --path env

# Get the IP of the Ingress resource
ip=$(kubectl get ing hello-http-trigger -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Get the hostname of the Ingress resource
host=$(kubectl get ing hello-http-trigger -o jsonpath='{.spec.rules[0].host}')

# Invoke the function with an HTTP request
curl --data 'HOSTNAME' --header "Host: $host" --header "Content-Type:application/json" $ip/env;echo
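The curl call above can also be scripted with Python's standard library; `ingress_ip` and `host` correspond to the `$ip` and `$host` variables resolved above (a rough equivalent, not part of the kubeless tooling):

```python
import urllib.request

def invoke_function(ingress_ip, host, payload):
    # POST the payload to the function's /env path. The Host header is set
    # explicitly so the Nginx Ingress routes the request to the hello
    # function, mirroring the curl invocation above.
    req = urllib.request.Request(
        f"http://{ingress_ip}/env",
        data=payload.encode(),
        headers={"Host": host, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode()
```
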

Monitoring and Logging

  • Use standard Kubernetes tooling together with monitoring solutions such as Prometheus to watch the performance and logs of your serverless functions. For example, kubectl logs -l function=hello shows the logs of the function’s pods, since Kubeless labels each function’s pods with the function name.

Cleanup

  • You can delete the function, and verify it is gone, using the commands below:
kubeless function delete hello
kubeless function ls

Serverless Kubernetes With Kubeless : Event-Driven Microservices

The concept goes by several names, Serverless, event-driven computing, or Functions as a Service (FaaS), but the idea is the same: resources are dynamically assigned to run distinct functions, or microservices, that are triggered by events. Serverless computing platforms let application developers concentrate on the application rather than the underlying infrastructure and all of its maintenance aspects.

Although serverless platforms are offered by most cloud providers, you can build your own with just two ingredients. One is the container orchestration system Kubernetes, which has established itself as a common foundation for developing resilient, componentized systems. The second is any of several systems that build serverless application patterns on top of Kubernetes.

Table of Contents

  • What is KEDA?
  • What is Knative?
  • What is Kubeless?
  • Kubernetes Components
  • How to Install Kubeless in your Kubernetes cluster?
  • How to Deploy your first Kubeless function?
  • Redesign Autoscaling infrastructure for Event-Driven Applications
  • Integrate KEDA with Knative
  • Understanding of Kubernetes Custom Metrics
  • Best Practices of Kubeless
  • Difference Between Kubernetes, KEDA and HPA
  • Difference Between Kubernetes and Openshift
  • Conclusion
  • Event Driven Computing Kubernetes – FAQs
