Thursday, May 25, 2023

mvn clean install is giving error "PKIX path building failed. unable to find valid certification path"

Issue: When trying to execute "mvn clean install", I am getting an error message saying "PKIX path building failed. unable to find valid certification path".

Reasons:

  • Certificates for the Nexus repository are not imported into the JDK truststore. To fix this, follow the below steps
    • Open the Nexus URL in the Chrome browser
    • Click the lock button in the address bar to the left of the URL
    • Click "Connection is secure"
    • Click "Certificate is valid"
    • Go to the "Details" tab
    • Export the certificate to some local path, say "/Users/asood/Downloads/www.amazon.in.cer"
    • Find the installation path of the JDK
    • Navigate to its lib/security directory
    • Import the above certificate into the cacerts truststore using the command
      • keytool -import -alias myalias -file /Users/asood/Downloads/www.amazon.in.cer -keystore cacerts
    • Try mvn clean install again and it should work now
  • The JDK used by Maven is different from the default JDK home set for the machine. To check this, follow the below steps
    • Run command
      • mvn --version
    • Confirm that the Java home shown is the same as the default JDK home.
    • If not, there are 2 options
      1. Change the path of the Java used by Maven to be the same as the default JDK home
      2. Add the certificates to that JDK's lib/security/cacerts by following the steps mentioned above
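
The import steps above can be condensed into a small shell sketch. The certificate path and alias below are just examples, and "changeit" is the JDK's default password for the cacerts truststore:

```shell
# Build the keytool import command (run it from <jdk>/lib/security,
# or point -keystore at the cacerts file directly).
# CERT_FILE and ALIAS are example values -- adjust for your setup.
CERT_FILE="/Users/asood/Downloads/www.amazon.in.cer"
ALIAS="nexus-cert"
CMD="keytool -import -alias $ALIAS -file $CERT_FILE -keystore cacerts -storepass changeit -noprompt"
echo "$CMD"
```

The -noprompt flag skips the interactive "Trust this certificate?" question; drop it if you want to inspect the certificate before trusting it.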


Tuesday, May 2, 2023

Understanding CompletableFuture in Java

Runnable

The Runnable interface was introduced in JDK 1.0 to execute a block of code in a separate thread and achieve multithreading in Java. It is present inside the java.lang package. It is a functional interface with a single method run(), which returns void, i.e. nothing.
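
A minimal sketch of Runnable in use (the class and thread names here are arbitrary):

```java
import java.util.concurrent.atomic.AtomicReference;

public class RunnableDemo {
    // Holder for the result, since run() itself cannot return anything
    static final AtomicReference<String> message = new AtomicReference<>();

    public static void main(String[] args) {
        Runnable task = () -> message.set("ran on " + Thread.currentThread().getName());
        Thread thread = new Thread(task, "worker-1");
        thread.start();
        try {
            thread.join(); // wait for the worker thread to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println(message.get());
    }
}
```

Because run() returns void, any result has to be shared through a field or a queue; this limitation is exactly what Callable addresses.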

Callable 

The Callable interface was introduced in JDK 5 to return a response back from an executing thread. It is present inside the java.util.concurrent package. It is also a functional interface with a single method call(), which returns the object produced by the method.

  • Example to get a Future object using the Callable interface via FutureTask
 
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class TestCallable {
    public static void main(String[] args) throws Exception {
        MyCallable myCallable = new MyCallable();
        FutureTask<Integer> futureTask = new FutureTask<>(myCallable);
        Thread thread = new Thread(futureTask);
        thread.start();
        int i = futureTask.get(); // blocks until call() completes
        System.out.println(i);
    }
}

class MyCallable implements Callable<Integer> {
    @Override
    public Integer call() throws Exception {
        return 105;
    }
}
  • Example to get a Future object using the Callable interface via the Executors framework
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TestCallable {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Callable<String> callable = () -> "Return some result";
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        Future<String> future = executorService.submit(callable);
        String s = future.get(); // blocks until the task completes
        System.out.println(s);
        executorService.shutdown();
    }
}
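
Since the title of this entry mentions CompletableFuture: it was added in JDK 8 (java.util.concurrent) and builds on Callable-style tasks by letting you chain transformations without blocking between steps. A minimal sketch:

```java
import java.util.concurrent.CompletableFuture;

public class TestCompletableFuture {
    public static void main(String[] args) {
        CompletableFuture<Integer> future = CompletableFuture
                .supplyAsync(() -> 100)  // runs on a ForkJoinPool worker thread
                .thenApply(n -> n + 5);  // chained, non-blocking transformation
        System.out.println(future.join()); // join() blocks only at the very end
    }
}
```

Unlike futureTask.get() above, the thenApply step runs asynchronously when the previous stage completes, so the calling thread is never blocked mid-pipeline.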


Different Terms used in AWS

  • EC2: Elastic Compute Cloud is a service used to run applications on virtual machines in AWS.
  • ECS: Elastic Container Service is used to deploy and manage containerized applications in an AWS environment
  • ECR: Elastic Container Registry is used to store container images
  • CloudFormation: Service used to define and provision infrastructure resources using JSON or YAML formatted Infrastructure as Code templates
  • IaC: Infrastructure as Code
  • Security Group: A virtual firewall for EC2 or ECS instances which controls incoming and outgoing traffic. Security groups are stateful, which means that if an inbound request passes, then the outbound response will pass as well.
  • NACL (Network Access Control List) is used to control the traffic in and out of one or more subnets.
  • Fargate: A serverless compute engine which eliminates the need for end-users to manage the servers that host containers. A user needs to package the application in containers; specify the operating system, CPU, and memory requirements; and configure networking and IAM policies. Servers are provisioned automatically by Fargate using the above specifications provided by the user.
  • NLB: Network Load Balancer is one of the four types of Elastic Load Balancers.
  • ELB: Elastic Load Balancer distributes the incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. They are of 4 types
    • Application Load Balancers
    • Network Load Balancers
    • Gateway Load Balancers
    • Classic Load Balancers
  • EMR (Elastic Map Reduce) makes it simple and cost effective to run highly distributed processing frameworks such as Hadoop, Spark, and Presto when compared to on-premises deployments. 
  • Athena helps to analyze unstructured, semi-structured, and structured data stored in Amazon S3.
  • Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. It is used to pull together data from many different sources like inventory systems, financial systems, and retail sales systems into a common format.
  • DynamoDB or Dynamo Database or DDB is a fully managed NoSQL database service provided by Amazon Web Services.
  • Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.
  • Data lakes accept unstructured data, while data warehouses only accept structured data from multiple sources.
  • CodeCommit is a managed source control service provided by AWS Cloud

General Computer Programming Concepts

What is the difference between imperative and declarative programming?

  • Imperative programming is the technique where we define the exact steps to reach an end result.
  • Declarative programming is the technique where we define what end result we want.
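
The difference is easiest to see in code. A sketch in Java (method and variable names are arbitrary): the imperative version spells out every step, while the declarative Stream version states only the desired result:

```java
import java.util.List;

public class StyleDemo {
    // Imperative: say HOW -- loop, test, accumulate, step by step
    static int sumEvensImperative(List<Integer> nums) {
        int sum = 0;
        for (int n : nums) {
            if (n % 2 == 0) {
                sum += n;
            }
        }
        return sum;
    }

    // Declarative: say WHAT -- "the sum of the even numbers"
    static int sumEvensDeclarative(List<Integer> nums) {
        return nums.stream()
                   .filter(n -> n % 2 == 0)
                   .mapToInt(Integer::intValue)
                   .sum();
    }

    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3, 4, 5, 6);
        System.out.println(sumEvensImperative(nums));  // 12
        System.out.println(sumEvensDeclarative(nums)); // 12
    }
}
```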

What is AOT and JIT compilation and what are their advantages and disadvantages?

  • AOT compilation refers to Ahead-Of-Time compilation and occurs during the build phase
  • JIT compilation refers to Just-In-Time compilation and occurs during the run phase
  • Advantages of AOT
    • The start-up time for the application becomes very low with this approach, though the build time is comparatively longer
    • Memory footprint of the application becomes very low as the JIT compiler and related components are excluded.
    • A JVM is not needed as it creates standalone executables
  • Disadvantages of AOT
    • The performance is lower as compared to JIT, as with JIT the code is optimized dynamically during run time
    • The standalone executable is platform dependent, unlike bytecode run on a JVM, which is platform independent

Micronaut Tutorials

How to start with micronaut?

  1. Install sdkman on the system. The installation can be verified using command sdk version
  2. Update sdkman using command sdk update
  3. Install micronaut using command sdk install micronaut 3.9.1
  4. Launch micronaut cli using command mn
  5. Create project using command create-app com.abc.micronaut.micronautguide --build=gradle_kotlin --lang=java
  6. Run project using command ./gradlew run

How to create a micronaut project with gradle kotlin DSL and java using command line?

mn create-app com.abc.micronaut.micronautguide --build=gradle_kotlin --lang=java


How to run a micronaut project with gradle kotlin DSL and java using command line?

./gradlew run








Basics of Terraform

Terraform is an IaC (Infrastructure as Code) tool that helps to automate provisioning, configuring and managing the application infrastructure, platform and services.

  • It resembles Ansible in a major way, but Ansible is more of a configuration tool for existing infrastructure
  • We can easily make any changes to existing infrastructure using Terraform.
  • We can easily replicate an existing infrastructure using Terraform.
Terraform has two components
  • Terraform Core 
    • Terraform Input
    • Terraform State
  • Terraform Providers
    • IaaS (Cloud) Providers (AWS)
    • PaaS Providers (Kubernetes)
    • Service Providers (Fastly)

The Terraform core component is used to create the plan, while the provider components are used to execute that plan.

Terraform code is written in a language called HCL, i.e. HashiCorp Configuration Language. The code is saved in a file with the extension .tf. It can create infrastructure across a variety of providers like AWS, GCP, Azure, Digital Ocean etc.

Terraform Commands

  • Refresh:
    • Gets the current state using the provider component
  • Plan:
    • Creates an execution plan using the core component
  • Apply:
    • Executes the plan
  • Destroy:
    • Removes the infrastructure
Pre-requisites
  • AWS CLI
  • Terraform
  • AWS CLI configured for the AWS account to be used
Install terraform
  • choco install terraform (via Windows Powershell)
  • brew install terraform (via Mac terminal)
  • Run below command to verify installation
    • terraform --version
Terraform plugins
  • These are executable binaries written in Go language that communicate with Terraform Core over an RPC interface. e.g. aws provider is a plugin
Terraform  modules
  • A module is a container for multiple resources that are used together.
  • A terraform configuration has at least one module, known as its root module, which consists of the resources defined in the .tf files in the main working directory.

Terraform providers
  • A provider adds a set of resource types and/or data sources that Terraform can manage.
  • They are available in the Terraform registry at url https://registry.terraform.io/browse/providers?product_intent=terraform
  • In production environments, provider versions are constrained in a configuration block called provider requirements
# Provider requirements are defined in this block
terraform {
  # Declare the required version using Version Constraint Syntax
  required_version = ">= 1.0"
  # Declare the required providers needed by the module
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.50.0, < 5.0.0"
    }
  }
}

Terraform Variables 
  • Input
    • Input variables let you customise aspects of Terraform modules without altering the module's own source code.
    • For variables declared in the root module of the configuration, values can be set using CLI options and environment variables.
    • For variables declared in child modules, the calling module should pass values in the module block.
    • An input variable in Terraform can be defined as

      variable "image_id" {
        default     = "ami-12345678"
        description = "Stores the value for image_id"
        type        = string # or number, bool, list, etc.
        validation {
          condition     = length(var.image_id) > 4 && substr(var.image_id, 0, 4) == "ami-"
          error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
        }
      }
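
Once declared, a variable is referenced elsewhere via var.<name>. A hypothetical usage sketch (the resource block below is an example, not part of any real configuration):

```hcl
# Reference the declared variable inside a resource block
resource "aws_instance" "example" {
  ami           = var.image_id
  instance_type = "t2.micro"
}
```

A value can then be supplied on the command line, e.g. terraform apply -var="image_id=ami-0abcd1234", or via a TF_VAR_image_id environment variable.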


Sample Terraform code
  • To define the provider and the region to be used for provisioning infrastructure, you can create a file with name main.tf and add below content

      provider "aws" {
        region = "ap-south-1"
      }

  • To create a resource such as an instance, database, load balancer etc., you can add content in below syntax

      resource "<PROVIDER>_<RESOURCE_TYPE>" "<RESOURCE_NAME>" {
        [CONFIG ...]
      }

    e.g.

      resource "aws_instance" "testing" {
        ami           = ""
        instance_type = "t2.micro"
      }

  • To execute terraform code
    • Go to the directory, where the main.tf is saved, via terminal
    • Run command
      • terraform init
    • The above command will initialize the backend and install the requested provider plugins inside a folder called .terraform
    • Run command
      • terraform plan -out "myplan.txt"
    • The above command will show what terraform will actually do. It is a kind of sanity testing. The plan will be saved to file myplan.txt
    • Run command
      • terraform apply "myplan.txt"
    • The above command will create the resource
    • Run command
      • terraform destroy
    • The above command will delete all the resources

AWS CLI Commands

We can use commands to perform various operations on an AWS account via the AWS CLI (Command Line Interface). The data for each page on the AWS console can be obtained via a corresponding CLI command

Pre-requisites to use AWS CLI Commands:
  • Make sure AWS CLI is installed on the system
  • Make sure that AWS is configured properly via the "aws configure" command

    aws configure set aws_access_key_id {ACCESS_KEY_ID}
    aws configure set aws_secret_access_key {SECRET_ACCESS_KEY}
    aws configure set aws_session_token {SESSION_TOKEN}

Note* To generate access key id and secret access key, please read the blog https://anshulsood2006.blogspot.com/2023/04/generating-access-key-id-and-secret.html

Note* To find the account id corresponding to your AWS account, please read the blog https://anshulsood2006.blogspot.com/2023/06/how-to-find-account-id-from-aws-console.html

  • Find the list of all the clusters available
    • aws ecs list-clusters
  • Find the list of all ECS services in a given cluster
    • aws ecs list-services --cluster {CLUSTER_NAME}
  • Find the list of all tasks in a given cluster
    • aws ecs list-tasks --cluster {CLUSTER_NAME}

Amazon ECS

Amazon ECS is a fully managed container orchestration service that helps you to more efficiently deploy, manage, and scale containerized applications.

Steps to deploy an application to ECS
  • Create a container image
  • Store the image in ECR (Elastic Container Registry)
  • Create a Task Definition which contains settings like exposed ports, docker image, cpu shares, memory requirements, commands to run and environment variables.
  • Create an instance of a task definition, which will be called a Task. Long-running tasks from the same task definition are called a service
  • A logical group of ECS services is called a cluster
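
A minimal Fargate-style task definition sketch in JSON (the family name, image URI, and values here are placeholders, not a real registry):

```json
{
  "family": "my-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "<account-id>.dkr.ecr.ap-south-1.amazonaws.com/my-app:latest",
      "portMappings": [{ "containerPort": 8080 }],
      "environment": [{ "name": "APP_ENV", "value": "dev" }]
    }
  ]
}
```

It can be registered with aws ecs register-task-definition --cli-input-json file://taskdef.json.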

AWS Fargate

AWS Fargate is a serverless compute engine which eliminates the need for end-users to manage the servers that host containers. A user needs to package the application in containers, specify the operating system, CPU, and memory requirements, and configure networking and IAM policies. Servers are provisioned automatically by Fargate using these specifications.

It has the following main components

  • Cluster: A logical group of tasks or services in Amazon ECS.
  • Task Definition: A text file that describes the application containers.
  • Task: A running instance of a task definition
  • Service: One or more long-running tasks created from the same task definition

SpringBoot Application Event Listeners

When a Spring Boot application starts, a few events occur in the below order: ApplicationStartingEvent, ApplicationEnvironmentPreparedEvent, Applicat...