Saturday, January 6, 2024

SpringBoot Application Event Listeners

When a Spring Boot application starts, the following events occur in this order:
  • ApplicationStartingEvent
  • ApplicationEnvironmentPreparedEvent
  • ApplicationContextInitializedEvent
  • ApplicationPreparedEvent
  • ApplicationStartedEvent
  • ApplicationReadyEvent
These events can be listened to by adding an application event listener to the Spring Boot application as below:

  • Create a class implementing ApplicationListener<SpringApplicationEvent> interface
public class ApplicationEventListener implements ApplicationListener<SpringApplicationEvent> {

    private static final Logger log = LoggerFactory.getLogger(ApplicationEventListener.class);

    @Override
    public void onApplicationEvent(SpringApplicationEvent event) {
        log.info(" > " + event);
    }
}
  • Register the above listener class to the spring boot application main class as
@SpringBootApplication
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(MyApplication.class);
        app.addListeners(new ApplicationEventListener());
        app.run(args);
    }
}
  • Start the application and all the above events can be seen in the console logs.

Tuesday, August 29, 2023

Functional Programming Explained

Rio: Hello Mr Archie, I have a question for you. I am not able to understand what functional programming actually is. I keep reading that to use functional programming we need to forget everything we know about programming. How can we forget about variables, values, functions, datatypes, indentation, namespaces, hashes, collections, classes, inheritance and so on? This statement is quite confusing. Could you please help me understand the actual idea behind functional programming?

Mr Archie: Hey Rio, that is indeed a very good question. I will try to explain it in an easy manner. That statement should actually be corrected: functional programming is all about refactoring everything you know about programming.

Let me explain what I mean by that. Even in a very well designed application, over time requirement changes and bug fixes lead to many modifications: re-designing, adding new modules, removing existing ones. All these factors slowly turn the original design into a mess, and each subsequent bug fix becomes even harder in such a design. Lots of if-else conditions get introduced along the way. This is a fairly generic problem in most object oriented languages; as an example, the type erasure process in Java is essentially one big conditional: "replace each type parameter with its bound class if it is bounded, or with Object if it is unbounded". At some point one feels the code needs to be refactored: keep the well designed modules as they are, re-organize the badly designed ones, and introduce as few new ones as possible while keeping the application's behaviour the same. In functional programming, functions are the bridge that binds these constructs together, and they are written in such a manner that:
  • There are no side effects in the function being called, meaning the state of the input must not change. This can be achieved, for example, by using immutable input parameters: if x = {3, 4, 5} is applied to a function y = f(x), then after evaluating y the value of x should still be {3, 4, 5}. Now a question will arise in your mind: what if the parameters passed are something mutable, like an array of hash-maps? For this we always need to make sure such data structures are immutable. Another question will then come to mind: if everything is immutable, how can we apply any logic at all? For this, the data structure should provide an operation that returns a modified copy of itself. To limit the number of copies created, efficient persistent data structures were introduced; these are immutable and support a copy-on-modification operation.
  • The final desired effects required by the application (the end user) are still produced, which means mutable state does have to enter the application somewhere. This is done through bridges between the side-effect-free functions on one side and the business logic on the other. The function is handed to the bridge, which performs the desired stateful work. Examples of such bridges are atomic classes, messaging queues and so on, which take care of producing the final results.

Here we can think of a programming function just like a mathematical function f(x), which is nothing but an abstract relationship between an input set and an output set. It will always give us the same output for the same input.
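The idea of a side-effect-free function can be sketched in Java (the class and method names here are illustrative):

```java
import java.util.List;

public class PureFunctionDemo {
    // A pure function: it never mutates its input and always returns
    // the same output for the same input.
    static List<Integer> doubled(List<Integer> xs) {
        return xs.stream().map(x -> x * 2).toList();
    }

    public static void main(String[] args) {
        List<Integer> x = List.of(3, 4, 5);   // immutable list
        List<Integer> y = doubled(x);
        System.out.println(x);                // x is unchanged: [3, 4, 5]
        System.out.println(y);                // [6, 8, 10]
    }
}
```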

Rio: Why do we call them persistent data structures? They do not seem to have any relation to databases here.

Mr Archie: Yeah, it was actually unfortunate that the designers of the functional programming paradigm chose the term persistent here; it has nothing to do with databases. They are called efficient persistent data structures because they always preserve the previous version of themselves when modified. Even though every modification produces a new version, they are written so efficiently that only minimal copying happens, by using tree-like structures with a branching factor of 32 that can support millions of elements. Only the node where the modification happens is copied; the remaining nodes still point to the original data structure.
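The structural-sharing idea can be illustrated with a tiny persistent linked list (a deliberate simplification of the 32-way branching tries Mr Archie mentions; all names are illustrative):

```java
public class PersistentListDemo {
    // An immutable cons cell: "modification" returns a new list that
    // shares the unchanged tail with the original.
    record Node(int head, Node tail) {
        Node prepend(int value) {
            return new Node(value, this); // the old list is untouched
        }

        int size() {
            return 1 + (tail == null ? 0 : tail.size());
        }
    }

    public static void main(String[] args) {
        Node v1 = new Node(3, null);  // [3]
        Node v2 = v1.prepend(2);      // [2, 3]
        Node v3 = v1.prepend(9);      // [9, 3] -- branches off v1
        // v2 and v3 both share v1 as their tail; v1 itself never changes.
        System.out.println(v1.size() + " " + v2.size() + " " + v3.size()); // 1 2 2
    }
}
```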

Rio: What is the biggest benefit of using functional programming?

Mr Archie: Sure, a few of the benefits of the functional programming approach are:
  • There are no assignment statements in functional programs, so variables are immutable.
  • We don't need to worry much about multithreading, as the input data is immutable.
  • Since there are no side effects, there is much less chance of introducing bugs.
  • Since functions are referentially transparent, meaning a call can be replaced by its resulting value without changing the program, the order of program statements need not be imperative.

Rio: And what are first class functions in programming language?

Mr Archie: A language is said to have first class functions when functions are treated as values: they can be assigned to variables, passed as parameters and returned as results. A function that takes another function as a parameter, or returns one as its output, is called a higher-order function.

Rio: If I may ask you specifically, how do we use functional programming in Java?

Mr Archie: The ways with which we use functional programming in java are:
  • Using Anonymous inner classes where we can create and instantiate a class at the same time. This is used if we want to use class declaration just once. 
  • Using Lambda Expression or anonymous functions which can accept parameters and return a value.
  • Using Method References.
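The three approaches can be contrasted on one small task, say sorting strings by length (a sketch; the class and variable names are illustrative):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class FunctionalStylesDemo {
    public static void main(String[] args) {
        List<String> words = new ArrayList<>(List.of("pear", "fig", "banana"));

        // 1. Anonymous inner class: declare and instantiate in one step
        words.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // 2. Lambda expression: same behaviour, less ceremony
        words.sort((a, b) -> Integer.compare(a.length(), b.length()));

        // 3. Method reference: reuse an existing method
        words.sort(Comparator.comparingInt(String::length));

        System.out.println(words); // [fig, pear, banana]
    }
}
```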
Rio: The concept of method references was introduced in Java 8, if I am not wrong. Can you please elaborate a bit on how we can use method references in functional programming?
 
Mr Archie: Of course, why not. Method references are nothing but a special type of lambda expression. They are called method references as they refer to an existing method. They can be used in 4 ways. I will explain them one by one.

  • Referring static method
                    The Apache Commons Lang StringUtils class has a static method toRootUpperCase(String string) which converts the given string to upper case using the root locale. It can be used in a lambda as below:
public class MethodReferenceForStaticMethods {
    public static void main(String[] args) {
        List<String> list = List.of("rio", "mr. archie");
        list.stream()
            .map(word -> StringUtils.toRootUpperCase(word))
            .forEach(System.out::println);
    }
}

           The same result can also be achieved using method references as below

public class MethodReferenceForStaticMethods {
    public static void main(String[] args) {
        List<String> list = List.of("rio", "mr. archie");
        list.stream()
            .map(StringUtils::toRootUpperCase)
            .forEach(System.out::println);
    }
}
  • Referring instance method
                        If an instance method of a class has the same parameter list as the abstract method of a functional interface, the instance method can be referred to as below:

public class MethodReferenceForInstanceMethods {
    public static void main(String[] args) {
        MethodReferenceForInstanceMethods instance =
                new MethodReferenceForInstanceMethods();
        MyFunctionalInterface methodReferenceForInstanceMethods =
                instance::instanceMethod;
        methodReferenceForInstanceMethods.say("a", "b", 5, 8);
    }

    public void instanceMethod(String a, String b, int c, Object d) {
        System.out.println("Hello, this is non-static method by: "
                + a + " " + b + " " + c + " " + d);
    }

    interface MyFunctionalInterface {
        void say(String a, String b, int d, Object e);
    }
}
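The remaining two forms Mr Archie counts, constructor references and references to an instance method of an arbitrary object of a particular type, can be sketched as below (the class and variable names are illustrative):

```java
import java.util.List;
import java.util.function.Function;

public class MethodReferenceOtherForms {
    public static void main(String[] args) {
        // 3. Referring a constructor: ClassName::new
        Function<String, StringBuilder> make = StringBuilder::new;
        System.out.println(make.apply("hello")); // hello

        // 4. Referring an instance method of an arbitrary object of a
        // particular type: the first parameter becomes the receiver,
        // i.e. String::toUpperCase means word -> word.toUpperCase()
        List<String> list = List.of("Rio", "Mr. Archie");
        list.stream()
            .map(String::toUpperCase)
            .forEach(System.out::println);
    }
}
```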

Rio: Thank you so much Mr Archie for making me understand the paradigm of functional programming in such a simple way. I will get back to you with more queries about it in coming days.

Mr Archie: No issues, you can come to me at any time.




Sunday, August 27, 2023

Generics in Java

Rio:  Guys, today Mr Archie would be telling us about the concept of Generics in Java. I am sure this lecture is going to clear all your doubts around Generics. You may come up with your queries as we move along. You have the floor now, Mr Archie.

Mr Archie: Thank you, Rio. So guys, most of you must have already heard about the term Generics in Java.


Rio: Yes, Mr Archie.


Mr Archie: Good. So, have you ever wondered where we use Generics in Java?


Audience is silent.


Mr Archie: Generics help us write generic code that can work with different data types. They enable type checking of Java objects at compile time, rather than running into casting related issues at run time. Let me try to explain this using an example.

Suppose you want to create a class to store and print a string. You would write something like this, then create an object of the class and call its print method as below:
class StringPrinter {
    private String toPrint;

    StringPrinter(String toPrint) {
        this.toPrint = toPrint;
    }

    public void print() {
        System.out.println(toPrint);
    }
}

public class Generics {
    public static void main(String[] args) {
        StringPrinter printer = new StringPrinter("Anshul");
        printer.print();
    }
}

Suppose later at some point you need another class you can use to print an integer. You could create a class almost identical to the one above:

public class IntegerPrinter {
    private Integer toPrint;

    IntegerPrinter(Integer toPrint) {
        this.toPrint = toPrint;
    }

    public void print() {
        System.out.println(toPrint);
    }
}

public class Generics {
    public static void main(String[] args) {
        StringPrinter stringPrinter = new StringPrinter("Anshul");
        stringPrinter.print();
        IntegerPrinter integerPrinter = new IntegerPrinter(12);
        integerPrinter.print();
    }
}

If you look carefully, the code is almost entirely duplicated between the two classes, viz. StringPrinter and IntegerPrinter.


This duplication can easily be removed using Generics. We can create a parameterised class as below:

public class Printer<T> {
    T toPrint;

    Printer(T toPrint) {
        this.toPrint = toPrint;
    }

    public void print() {
        System.out.println(toPrint);
    }
}

public class Generics {
    public static void main(String[] args) {
        Printer<String> stringPrinter = new Printer<>("Anshul");
        stringPrinter.print();
        Printer<Integer> integerPrinter = new Printer<>(12);
        integerPrinter.print();
    }
}

Jamie (among the audience): Wow, that's cool!

Mr Archie: Yes, it is indeed a cool feature provided by Java. And not only classes: we can also create generic methods.
public class Printer<T> {
    T toPrint;

    Printer(T toPrint) {
        this.toPrint = toPrint;
    }

    public void print() {
        System.out.println(toPrint);
    }

    public T get() {
        return toPrint;
    }
}

If you look carefully, you will see that the method get returns the generic type, and we can call this method to get back the respective data type:

public class Generics {
    public static void main(String[] args) {
        Printer<String> stringPrinter = new Printer<>("Anshul");
        String str = stringPrinter.get();
        System.out.println(str);
        Printer<Integer> integerPrinter = new Printer<>(12);
        Integer integer = integerPrinter.get();
        System.out.println(integer);
    }
}
Rio: Can we use primitive data types in generics?

Mr Archie: Generics work with reference types only; you cannot use primitive data types directly. You can, however, use the wrapper classes such as Integer, and autoboxing converts primitives for you.

Rio: Thanks!

Jamie: Mr Archie, I had once heard about bounded and Unbounded Generic Types. Can you please explain those concepts, as well, here?

Mr Archie: Yes Jamie, I was about to explain those. In the above example the type parameter can take any argument, so it is called an unbounded generic type. Overall, we can categorise generics into 2 kinds:
  • Unbounded Generic Types: These can take any reference type <T>, or a wildcard <?> if the type is unknown.
  • Bounded Generic Types: If you want to put a boundary on the types that will be accepted by the generic class/method, you can use bounded generic types.
Rio: So you mean, if we want to restrict the types that will be accepted by a generic class/method, we can do that? How is that possible?

Mr Archie: Let me explain in a better way. Until now, all my examples used unbounded generic types, where we were not limiting the types that could be used with the generic class; we could use any reference type, e.g. String, Integer, Animal, Airplane, Student, anything. There may be scenarios where you need to restrict your generic class to accept only certain reference types. Bounded generic types help us achieve this. They come in 2 flavours:
  • Upper Bounded
  • Lower Bounded
Jamie: Examples, Mr Archie, please?

Mr Archie: Suppose you want to make your generic class accept only type T and its subtypes, you will go for Upper Bounded Generic Type. Let me explain it using the above example only.

The generic class Printer above can accept any reference type, even, say, an Animal class, if one exists:
public class Animal {
    private String name;

    Animal(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Animal{" +
                "name='" + name + '\'' +
                '}';
    }
}

public class Generics {
    public static void main(String[] args) {
        Printer<String> stringPrinter = new Printer<>("Anshul");
        String str = stringPrinter.get();
        System.out.println(str);
        Printer<Integer> integerPrinter = new Printer<>(12);
        Integer integer = integerPrinter.get();
        System.out.println(integer);
        Printer<Animal> animalPrinter = new Printer<>(new Animal("Cat"));
        animalPrinter.print();
    }
}

If we want this Printer class to accept Number or its subtypes only, we can use an upper bounded generic via the extends keyword as below:
public class Printer<T extends Number> {
    T toPrint;

    Printer(T toPrint) {
        this.toPrint = toPrint;
    }

    public void print() {
        System.out.println(toPrint);
    }

    public T get() {
        return toPrint;
    }

    @Override
    public String toString() {
        return toPrint.toString();
    }
}
After doing so, the Printer class will no longer accept String or Animal, and you will get a compile time error along the lines of "Type parameter is not within its bound" if you try.

And similarly, if we want this Printer class to accept Animal or its subtypes only, we can change it to:
public class Printer<T extends Animal> {
    T toPrint;

    Printer(T toPrint) {
        this.toPrint = toPrint;
    }

    public void print() {
        System.out.println(toPrint);
    }

    public T get() {
        return toPrint;
    }

    @Override
    public String toString() {
        return toPrint.toString();
    }
}
A lower bounded type parameter, however, is supported only at method level, not at class level. This is because the wildcard ? cannot be used in a class declaration; it makes no sense there. Let me try to explain.

Even if we could somehow declare a class Printer with a ? type parameter, we would have no way to name that type when declaring instance variables or using them inside methods. A class needs an identifier for its type parameter, unlike a method, where the wildcard only appears in the parameter list. This may be confusing for now; I will try to cover it in another lecture. For the moment, just remember that we cannot use the super keyword in a type parameter while declaring a generic class.
Rio: Sure, no worries. We will take a note of this.

Mr Archie: Now, for using the super keyword with lower bounded generics, let me give an example. Suppose we have lists of Number, Integer and Object types, and we want one generic method that works on all of them. If we try it this way:
public class Generics {
    public static void main(String[] args) {
        List<Number> listNumber = Arrays.asList(1, 2);
        List<Integer> listInteger = Arrays.asList(1, 2);
        List<Object> listObject = Arrays.asList(1, 2);
        print(listNumber);
        print(listInteger);
        print(listObject);
    }

    private static void print(List<Integer> listNumber) {
    }

    private static void print(List<Number> listNumber) {
    }

    private static void print(List<Object> listNumber) {
    }
}
we get a compile time error saying "both methods have same erasure", since after type erasure all three overloads take a plain List. So how can we write a single method that supports Integer and its superclasses? We can use lower bounded generics here. The required method can be written as:
public class Generics {
    public static void main(String[] args) {
        List<Number> listNumber = Arrays.asList(1, 2);
        List<Integer> listInteger = Arrays.asList(1, 2);
        List<Object> listObject = Arrays.asList(1, 2);
        print(listNumber);
        print(listInteger);
        print(listObject);
    }

    private static void print(List<? super Integer> listNumber) {
        listNumber.forEach(System.out::println);
    }
}
Rio: Mr Archie, so how can we determine where we should use upper bound generics and where we should go for lower bound generics?

Mr Archie: There is a very simple get-put rule: if you want to get something out, go for an upper bound; if you want to put something in, go for a lower bound.
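The get-put rule (often summarised as PECS: producer extends, consumer super) can be sketched in one method (the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class GetPutRuleDemo {
    // src is read from (a producer), so it gets an upper bound;
    // dst is written to (a consumer), so it gets a lower bound.
    static <T> void copy(List<? extends T> src, List<? super T> dst) {
        for (T item : src) {
            dst.add(item);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = List.of(1, 2, 3);
        List<Number> numbers = new ArrayList<>();
        copy(ints, numbers);          // Integer produces, Number consumes
        System.out.println(numbers);  // [1, 2, 3]
    }
}
```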

One more topic I think can be covered here today, is about variance.

Rio: Oh still more left in there. Please go ahead, quite interesting.

Mr Archie: Variance is the assignment compatibility between generic classes and methods.

Let's take an example: we have an array of animals and a list of animals, and two overloaded methods, one with Object[] as parameter and another with List<Object> as parameter.
public class Generics {
    public static void main(String[] args) {
        Animal[] arrayAnimal = new Animal[10];
        arrayAnimal[0] = new Animal("monkey");
        arrayAnimal[1] = new Cat("cat");
        arrayAnimal[2] = new Dog("dog");
        List<Animal> listAnimal = new ArrayList<>();
        listAnimal.add(new Animal("monkey"));
        listAnimal.add(new Cat("cat"));
        listAnimal.add(new Dog("dog"));
    }

    private static void doStuff(Object[] objects) {
        System.out.println(objects);
    }

    private static void doStuff(List<Object> objects) {
        System.out.println(objects);
    }
}
If we pass arrayAnimal to the method accepting Object[], that is fine, but if we pass listAnimal to the method accepting List<Object>, we get a compile time error. This is because an array of Animal is a subtype of an array of Object (arrays are covariant), but a List<Animal> is not a subtype of List<Object>: generics are invariant.

To fix this, we can change the method to use the wildcard ? so it accepts a list of any type; this form is sometimes described as bivariant.
public class Generics {
    public static void main(String[] args) {
        List<Animal> listAnimal = new ArrayList<>();
        listAnimal.add(new Animal("monkey"));
        listAnimal.add(new Cat("cat"));
        listAnimal.add(new Dog("dog"));
        doStuff(listAnimal);
    }

    private static void doStuff(List<?> objects) {
        System.out.println(objects);
    }
}
Another way to fix the issue is to change the method to accept objects of type T (Animal in this case) and of subtypes of T, making it covariant. This is nothing but the upper bounded generics concept.
public class Generics {
    public static void main(String[] args) {
        Animal[] arrayAnimal = new Animal[10];
        arrayAnimal[0] = new Animal("monkey");
        arrayAnimal[1] = new Cat("cat");
        arrayAnimal[2] = new Dog("dog");
        List<Animal> listAnimal = new ArrayList<>();
        listAnimal.add(new Animal("monkey"));
        listAnimal.add(new Cat("cat"));
        listAnimal.add(new Dog("dog"));
        doStuff(arrayAnimal);
        doStuff(listAnimal);
    }

    private static void doStuff(Object[] objects) {
        System.out.println(objects);
    }

    private static void doStuff(List<? extends Animal> objects) {
        System.out.println(objects);
    }
}
Similarly, to understand contravariance, let's revisit the scenario used above:
public class Generics {
    public static void main(String[] args) {
        List<Number> listNumber = Arrays.asList(1, 2);
        List<Integer> listInteger = Arrays.asList(1, 2);
        List<Object> listObject = Arrays.asList(1, 2);
        print(listNumber);
        print(listInteger);
        print(listObject);
    }

    private static void print(List<? super Integer> listNumber) {
        listNumber.forEach(System.out::println);
    }
}
Here the print method accepts lists of type T (Integer in this case) and of its superclasses, viz. Number and Object, which makes it contravariant.

Rio: Wow, very nicely explained!

Mr Archie: One more thing that just came to my mind is the type erasure process. This is the process by which the compiler removes generic type information at compile time, replacing the type parameters in a generic class with concrete types. The compiler follows these rules:
  • For bounded type parameters, bounded types are inserted.
  • For unbounded type parameters, Object class is inserted.
  • To preserve type safety, type casts are inserted where needed.
  • To preserve polymorphism in extended generic type classes, bridge methods are generated.
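As a rough sketch of these rules applied to the bounded Printer class above (the "erased" class is an illustrative source-level approximation; the compiler actually performs erasure at the bytecode level):

```java
public class ErasureDemo {
    // Before erasure: a bounded generic class
    static class Printer<T extends Number> {
        T toPrint;
        Printer(T toPrint) { this.toPrint = toPrint; }
        T get() { return toPrint; }
    }

    // Roughly what the compiler erases it to: T is replaced by its
    // bound, Number (it would be Object for an unbounded T).
    static class ErasedPrinter {
        Number toPrint;
        ErasedPrinter(Number toPrint) { this.toPrint = toPrint; }
        Number get() { return toPrint; }
    }

    public static void main(String[] args) {
        Printer<Integer> p = new Printer<>(12);
        Integer i = p.get(); // compiler inserts the cast: (Integer) p.get()
        System.out.println(i + " " + new ErasedPrinter(12).get()); // 12 12
    }
}
```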
That's all for today, guys. We will meet soon again to discuss some other interesting topic. Thank you, everyone.

Rio: Thanks, Mr Archie for this wonderful session today. This indeed helped us in understanding the concept of Generics in Java. See you soon in another session.


Thursday, June 22, 2023

A few uncommon things to know about Docker

How to push your local images to remote docker hub?

    • Create an account on docker hub via URL https://hub.docker.com/signup
    • Login to the account using the credentials
    • Create a repository from the webpage where you want to push your images to.
    • Create the docker image on your local using command
      • docker build -t {image-name} .
    • Login to docker cli using command
      • docker login
    • Enter the docker hub id and password
    • Create a tag for your local docker image using command
      • docker tag {local-image-name}:{local-tag-version} {remote-repository-name}/{remote-image-name}:{remote-tag-version}
    • Push your tag to the repository using command
      • docker push {remote-repository-name}/{remote-image-name}:{remote-tag-version} 
    • Now you can use the docker hub image using command
      • docker run -d -p 8080:8080 {remote-repository-name}/{remote-image-name}:{remote-tag-version}

Tuesday, June 13, 2023

How to find the account id from AWS console?

  1.  Sign into the AWS Console at the url https://signin.aws.amazon.com/signin using root credentials.
  2. Click on the user name at the top right corner.
  3. Click "Security Credentials" from the drop down menu.
  4. The account id is displayed on the page.

Thursday, May 25, 2023

mvn clean install is giving error "PKIX path building failed. unable to find valid certification path"

Issue: When trying to execute "mvn clean install", I am getting an error message saying "PKIX path building failed: unable to find valid certification path".

Reasons:

  • Certificates to the nexus repository are not imported to JDK trust store. To do this follow the below steps
    • Open the nexus url in chrome browser
    • Click on lock button in the address bar to the left of the url.
    • Click "Connection is secure". 
    • Click "Certificate is valid".
    • Go to tab "Details"
    • Export the certificate to some local path say "/Users/asood/Downloads/www.amazon.in.cer" 
    • Find the installation path of the jdk 
    • Navigate to lib/security directory
    • Import the above certificate into cacerts truststore using command
      • keytool -import -alias myalias -file /Users/asood/Downloads/www.amazon.in.cer -keystore cacerts
    • Try mvn clean install again and it should be ok now
  • The jdk used by maven is different from the default jdk home set for the machine. To check this follow the below steps
    • Run command 
      • mvn --version
    • Confirm if the jdk path is same as that being used default jdk home.
    • If not, there are 2 options
      1. Change the path of java used by maven to be same as the default jdk home
      2. Add the certificates to the lib/security/cacerts by following the steps mentioned above 


Tuesday, May 2, 2023

Understanding Completable Futures in Java

Runnable

The Runnable interface was introduced in JDK 1.0 to execute a block of code in a separate thread, achieving multithreading in Java. It lives in the java.lang package. It is a functional interface with a single method run(), which returns void, i.e. nothing.
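A minimal Runnable usage might look like this (the message text is illustrative):

```java
public class TestRunnable {
    public static void main(String[] args) throws InterruptedException {
        // run() returns void, so the thread can only perform side effects
        Runnable task = () -> System.out.println(
                "Running in: " + Thread.currentThread().getName());
        Thread thread = new Thread(task);
        thread.start();
        thread.join(); // wait for the thread to finish
    }
}
```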

Callable 

The Callable interface was introduced in JDK 5 to return a response back from an executing thread. It lives in the java.util.concurrent package. It is also a functional interface, with a single method call(), which returns the object produced by the method.

  • Example of getting a Future result using the Callable interface via FutureTask:
 
public class TestCallable {
    public static void main(String[] args) throws Exception {
        MyCallable myCallable = new MyCallable();
        FutureTask<Integer> futureTask = new FutureTask<>(myCallable);
        Thread thread = new Thread(futureTask);
        thread.start();
        int i = futureTask.get(); // blocks until the result is available
        System.out.println(i);
    }
}

class MyCallable implements Callable<Integer> {
    @Override
    public Integer call() throws Exception {
        return 105;
    }
}
  • Example of getting a Future result using the Callable interface via the Executors framework:
public class TestCallable {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Callable<String> callable = () -> "Return some result";
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        Future<String> future = executorService.submit(callable);
        String s = future.get();
        System.out.println(s);
        executorService.shutdown();
    }
}
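CompletableFuture itself, introduced in JDK 8 in the java.util.concurrent package, builds on Future by letting results be transformed and chained without blocking between stages. A minimal sketch (the values are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class TestCompletableFuture {
    public static void main(String[] args) {
        CompletableFuture<Integer> future = CompletableFuture
                .supplyAsync(() -> 105)     // runs in the common ForkJoinPool by default
                .thenApply(n -> n * 2)      // transform the result when it arrives
                .exceptionally(ex -> -1);   // fallback value on failure
        System.out.println(future.join());  // 210
    }
}
```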


Different Terms used in AWS

  • EC2: Elastic Compute Cloud is used to run applications on virtual machines in AWS.
  • ECS: Elastic Container Service is used to deploy and manage containerized applications in an AWS environment.
  • ECR: Elastic Container Registry is used to store container images.
  • CloudFormation: used to define and provision infrastructure resources using JSON- or YAML-formatted Infrastructure as Code templates.
  • IaC: Infrastructure as Code.
  • Security Group: a virtual firewall for EC2 or ECS instances that controls incoming and outgoing traffic. Security groups are stateful, which means that if an inbound request is allowed, the corresponding outbound response is allowed as well.
  • NACL (Network Access Control List): used to control the traffic in and out of one or more subnets.
  • Fargate: a serverless compute engine that eliminates the need for end users to manage the servers that host containers. A user packages the application in containers, specifies the operating system, CPU and memory requirements, and configures networking and IAM policies; Fargate provisions the servers automatically from those specifications.
  • NLB: Network Load Balancer is one of the four types of Elastic Load Balancers.
  • ELB: Elastic Load Balancer distributes incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. They are of 4 types:
    • Application Load Balancers
    • Network Load Balancers
    • Gateway Load Balancers
    • Classic Load Balancers
  • EMR (Elastic Map Reduce) makes it simple and cost effective to run highly distributed processing frameworks such as Hadoop, Spark, and Presto when compared to on-premises deployments. 
  • Athena helps to analyze unstructured, semi-structured, and structured data stored in Amazon S3.

  • Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. It is used to pull together data from many different sources, like inventory systems, financial systems, and retail sales systems, into a common format.

  • DynamoDB or Dynamo Database or DDB is a fully managed NoSQL database service provided by Amazon Web Services.

  • Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.

  • Data lakes accept unstructured data while Data Warehouses only accept structured data from multiple sources.
  • CodeCommit is a managed source code control service provided by AWS Cloud

General Computer Programming Concepts

What is the difference between imperative and declarative programming?

  • Imperative Programming is the programming technique where we define exact steps to reach an end result.
  • Declarative Programming is the programming technique where we define what end result we want.
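The difference is easy to see in Java by summing even numbers both ways (a sketch; the values are illustrative):

```java
import java.util.List;

public class ImperativeVsDeclarative {
    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3, 4, 5, 6);

        // Imperative: spell out every step of how to reach the result
        int sum = 0;
        for (int n : nums) {
            if (n % 2 == 0) {
                sum += n;
            }
        }
        System.out.println(sum); // 12

        // Declarative: state what result we want, not how to compute it
        int sum2 = nums.stream()
                .filter(n -> n % 2 == 0)
                .mapToInt(Integer::intValue)
                .sum();
        System.out.println(sum2); // 12
    }
}
```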
What are AOT and JIT compilation, and what are their advantages and disadvantages?
  • AOT compilation refers to Ahead Of Time compilation and occurs during the build phase.
  • JIT compilation refers to Just In Time compilation and occurs at run time.
  • Advantages of AOT
    • The application's start-up time is much shorter with this approach, although the build time is comparatively longer.
    • The memory footprint of the application is much lower, as the JIT compiler and related components are excluded.
    • A JVM is not needed, as AOT creates standalone executables.
  • Disadvantages of AOT
    • Performance can be lower than with JIT, since JIT optimizes the code dynamically at run time.
    • The resulting executable is platform-specific, unlike JVM bytecode, which is platform independent.

Micronaut Tutorials

How to start with micronaut?

  1. Install sdkman on the system. The installation can be verified using command sdk --version
  2. Update sdkman using command sdk update
  3. Install micronaut using command sdk install micronaut 3.9.1
  4. Launch micronaut cli using command mn
  5. Create project using command create-app com.abc.micronaut.micronautguide --build=gradle_kotlin --lang=java
  6. Run project using command ./gradlew run

How to create a micronaut project with gradle kotlin DSL and java using command line?

mn create-app com.abc.micronaut.micronautguide --build=gradle_kotlin --lang=java


How to run a micronaut project with gradle kotlin DSL and java using command line?

./gradlew run








Basics of Terraform

Terraform is an IaC (Infrastructure as Code) tool that helps automate provisioning, configuring and managing application infrastructure, platforms and services.

  • It resembles Ansible in many ways, but Ansible is primarily a configuration management tool for existing infrastructure.
  • We can easily make any changes to existing infrastructure using Terraform.
  • We can easily replicate an existing infrastructure using Terraform.
Terraform has two components
  • Terraform Core 
    • Terraform Input
    • Terraform State
  • Terraform Providers
    • IAAS (Cloud) Providers (AWS)
    • PaaS Providers (Kubernetes)
    • Service Providers (Fastly)

The Terraform core component is used to create the plan, while the provider components are used to execute that plan.

Terraform code is written in a language called HCL, i.e. HashiCorp Configuration Language. The code is saved in files with the extension .tf. It can create infrastructure across a variety of providers like AWS, GCP, Azure, Digital Ocean etc.

Terraform Commands

  • Refresh
    • Gets the current state using the provider component
  • Plan
    • Creates an execution plan using the core component
  • Apply
    • Executes the plan
  • Destroy
    • Removes the infrastructure
Pre-requisites
  • AWS CLI
  • Terraform
  • AWS CLI configured for the AWS account to be used
Install terraform
  • choco install terraform (via Windows Powershell)
  • brew install terraform (via Mac terminal)
  • Run below command to verify installation
    • terraform --version
Terraform plugins
  • These are executable binaries written in the Go language that communicate with Terraform Core over an RPC interface, e.g. the AWS provider is a plugin
Terraform modules
  • A module is a container for multiple resources that are used together.
  • A terraform configuration has at least one module, known as its root module, which consists of the resources defined in the .tf files in the main working directory.
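To illustrate how a child module is invoked from the root module, a minimal sketch (the module name, source path and input variable are hypothetical):

```hcl
# Invoke a child module kept under ./modules/network (hypothetical path)
module "network" {
  source     = "./modules/network"
  # Arguments here set values for the child module's input variables
  cidr_block = "10.0.0.0/16"
}
```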

Terraform providers
  • A provider adds a set of resource types and/or data sources that Terraform can manage. 
  • They are available in the Terraform registry at url https://registry.terraform.io/browse/providers?product_intent=terraform
  • In production environments, provider versions are constrained in a configuration block called provider requirements
# Provider requirements are defined in this block
terraform {
  # Declare the required version using Version Constraint Syntax
  required_version = ">= 1.0"
  # Declare the required providers needed by the module
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.50.0, < 5.0.0"
    }
  }
}

Terraform Variables 
  • Input
    • Input variables let you customise aspects of Terraform modules without altering the module's own source code.
    • For variables declared in the root module of the configuration, values can be set using CLI options and environment variables.
    • For variables declared in child modules, the calling module passes values in the module block.
    • An input variable in Terraform can be defined as below. Note that HCL uses no commas between arguments, and the type can be string, number, bool, list etc. (the names and default here are illustrative)
      variable "image_id" {
        type        = string
        description = "Stores the AMI id for the instance"
        default     = "ami-default"
        validation {
          condition     = length(var.image_id) > 4 && substr(var.image_id, 0, 4) == "ami-"
          error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
        }
      }
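Once declared, a variable is referenced elsewhere with the var. prefix; a short sketch using the image_id variable from the validation example above (the resource name is hypothetical):

```hcl
# Reference the input variable via var.<name>
resource "aws_instance" "example" {
  ami           = var.image_id  # assumes a variable named image_id is declared
  instance_type = "t2.micro"
}
```

Its value can also be supplied at plan/apply time on the CLI, e.g. terraform apply -var="image_id=<your AMI id>".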


Sample Terraform code
  • To define the provider and the region to be used for provisioning infrastructure, you can create a file with name main.tf and add below content
      provider "aws" {
        region = "ap-south-1"
      }
  • To create a resource such as an instance, database, load balancer etc., you can add content in below syntax
      resource "<PROVIDER>_<RESOURCE_TYPE>" "<RESOURCE_NAME>" {
        [CONFIG ...]
      }

      e.g.

      resource "aws_instance" "testing" {
        ami           = ""
        instance_type = "t2.micro"
      }
  • To execute terraform code
    • Go to the directory, where the main.tf is saved, via terminal
    • Run command
      • terraform init
    • The above command will initialize the backend and download the requested provider plugins inside a folder called .terraform
    • Run command
      • terraform plan -out "myplan.txt"
    • The above command will show what terraform will actually do, without making any changes; it is a kind of dry run. The plan will be saved to file myplan.txt
    • Run command
      • terraform apply "myplan.txt"
    • The above command will create the resource
    • Run command 
      • terraform destroy
    • The above command will delete all the resources
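To see attributes of the created resources after apply, an output block can be added; this sketch assumes the aws_instance.testing resource from the sample above:

```hcl
# Prints the public IP of the aws_instance.testing resource after "terraform apply"
output "instance_public_ip" {
  value = aws_instance.testing.public_ip
}
```

The value can be read back later from state with terraform output instance_public_ip.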
