2020-04-29

Hibernate JPA @Where Example

@Where

Sometimes, you want to filter out entities or collections using custom SQL criteria. This can be achieved using the @Where annotation, which can be applied to entities and collections.

@Where mapping usage
public enum AccountType {
    DEBIT,
    CREDIT
}

@Entity(name = "Client")
public static class Client {

    @Id
    private Long id;

    private String name;

    @Where(clause = "account_type = 'DEBIT'")
    @OneToMany(mappedBy = "client")
    private List<Account> debitAccounts = new ArrayList<>();

    @Where(clause = "account_type = 'CREDIT'")
    @OneToMany(mappedBy = "client")
    private List<Account> creditAccounts = new ArrayList<>();

    //Getters and setters omitted for brevity
}

@Entity(name = "Account")
@Where(clause = "active = true")
public static class Account {

    @Id
    private Long id;

    @ManyToOne
    private Client client;

    @Column(name = "account_type")
    @Enumerated(EnumType.STRING)
    private AccountType type;

    private Double amount;

    private Double rate;

    private boolean active;

    //Getters and setters omitted for brevity
}
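
For illustration, a minimal usage sketch (assuming the omitted getters and an open EntityManager): because Account itself carries @Where(clause = "active = true"), inactive accounts are filtered out everywhere, and each collection additionally sees only the rows matching its account_type clause.

Client client = entityManager.find(Client.class, 1L);

// loads only active accounts with account_type = 'DEBIT'
List<Account> debitAccounts = client.getDebitAccounts();

// loads only active accounts with account_type = 'CREDIT'
List<Account> creditAccounts = client.getCreditAccounts();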

Hibernate JPA @Columns Example

@Columns

The @Columns annotation is used to group multiple JPA @Column annotations.

@Columns supports an array of columns and is useful for mappings that use component user types.
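
For illustration, a minimal sketch along the lines of the Hibernate user guide (MonetaryAmount and its user type are assumptions here, not defined in this note): a single component property is mapped onto two columns.

@Type(type = "org.hibernate.userguide.mapping.basic.MonetaryAmountUserType")
@Columns(columns = {
    @Column(name = "salary_amount"),
    @Column(name = "salary_currency")
})
private MonetaryAmount salary;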

Hibernate JPA @ColumnDefault Example

@ColumnDefault

The @ColumnDefault annotation is used to specify the DEFAULT value to apply to the associated column via DDL when using the automated schema generator.

The same behavior can be achieved using the columnDefinition attribute of the JPA @Column annotation.
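
A minimal sketch (the field and its default value are made up for illustration): with schema generation enabled, the generated column DDL carries the DEFAULT clause.

@Entity(name = "Account")
public static class Account {

    @Id
    private Long id;

    // generated DDL: description VARCHAR(255) DEFAULT 'N/A'
    @ColumnDefault("'N/A'")
    private String description;

    // the equivalent effect using only JPA:
    // @Column(columnDefinition = "VARCHAR(255) DEFAULT 'N/A'")
}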

Hibernate JPA @CollectionType Example

@CollectionType

The @CollectionType annotation is used to specify a custom collection type.

The collection can also name a @Type, which defines the Hibernate Type of the collection elements.

Custom collection mapping example

@Entity(name = "Person")
public static class Person {

    @Id
    private Long id;

    @OneToMany(cascade = CascadeType.ALL)
    @CollectionType(type = "org.hibernate.userguide.collections.type.QueueType")
    private Collection<Phone> phones = new LinkedList<>();

    //Constructors are omitted for brevity

    public Queue<Phone> getPhones() {
        return (Queue<Phone>) phones;
    }
}

@Entity(name = "Phone")
public static class Phone implements Comparable<Phone> {

    @Id
    private Long id;

    private String type;

    @NaturalId
    @Column(name = "`number`")
    private String number;

    //Getters and setters are omitted for brevity

    @Override
    public int compareTo(Phone o) {
        return number.compareTo(o.getNumber());
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        Phone phone = (Phone) o;
        return Objects.equals(number, phone.number);
    }

    @Override
    public int hashCode() {
        return Objects.hash(number);
    }
}

public class QueueType implements UserCollectionType {

    @Override
    public PersistentCollection instantiate(
            SharedSessionContractImplementor session,
            CollectionPersister persister) throws HibernateException {
        return new PersistentQueue( session );
    }

    @Override
    public PersistentCollection wrap(
            SharedSessionContractImplementor session,
            Object collection) {
        return new PersistentQueue( session, (List) collection );
    }

    @Override
    public Iterator getElementsIterator(Object collection) {
        return ( (Queue) collection ).iterator();
    }

    @Override
    public boolean contains(Object collection, Object entity) {
        return ( (Queue) collection ).contains( entity );
    }

    @Override
    public Object indexOf(Object collection, Object entity) {
        int i = ( (List) collection ).indexOf( entity );
        return ( i < 0 ) ? null : i;
    }

    @Override
    public Object replaceElements(
            Object original,
            Object target,
            CollectionPersister persister,
            Object owner,
            Map copyCache,
            SharedSessionContractImplementor session)
            throws HibernateException {
        Queue result = (Queue) target;
        result.clear();
        result.addAll( (Queue) original );
        return result;
    }

    @Override
    public Object instantiate(int anticipatedSize) {
        return new LinkedList<>();
    }

}

public class PersistentQueue extends PersistentBag implements Queue {

    public PersistentQueue(SharedSessionContractImplementor session) {
        super( session );
    }

    public PersistentQueue(SharedSessionContractImplementor session, List list) {
        super( session, list );
    }

    @Override
    public boolean offer(Object o) {
        return add(o);
    }

    @Override
    public Object remove() {
        // Queue contract: remove() throws when the queue is empty
        Object head = poll();
        if (head == null) {
            throw new NoSuchElementException();
        }
        return head;
    }

    @Override
    public Object poll() {
        // Queue contract: poll() returns null when the queue is empty
        if (size() > 0) {
            Object first = get(0);
            remove(0);
            return first;
        }
        return null;
    }

    @Override
    public Object element() {
        // Queue contract: element() throws when the queue is empty
        Object head = peek();
        if (head == null) {
            throw new NoSuchElementException();
        }
        return head;
    }

    @Override
    public Object peek() {
        return size() > 0 ? get(0) : null;
    }
}
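
A minimal usage sketch (the id-taking constructors were omitted above, so their exact shape is an assumption; an open EntityManager is assumed as well):

Person person = new Person(1L);
person.getPhones().offer(new Phone(1L, "landline", "028-234-9876"));
person.getPhones().offer(new Phone(2L, "mobile", "072-122-9876"));
entityManager.persist(person);

// later: the collection behaves as a FIFO queue
Person saved = entityManager.find(Person.class, 1L);
Phone head = saved.getPhones().poll(); // dequeues in insertion order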

Java Directories example



DirectoryStream

• To iterate over the entries in a directory
• Scales to large directories
• Optional filter to decide if entries should be accepted or filtered
• Built-in filters to match file names using glob or regular expression

Path dir = Paths.get("mydir");
try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, "*.java")) {
    for (Path entry : stream) {
        System.out.println(entry.getFileName());
    }
}


Path dir = Paths.get("mydir");
DirectoryStream.Filter<Path> filter = entry -> Files.isDirectory(entry);
try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, filter)) {
    for (Path entry : stream) {
        System.out.println(entry.getFileName());
    }
}

Directory stream filters

• DirectoryStream.Filter decides whether an entry should be accepted
• An entry can be accepted based on its MIME type via Files.probeContentType, which uses the installed file type detectors
• Filters can be combined into simple expressions

Files.walkFileTree utility method

• Recursively descends directory hierarchy rooted at starting file
• Easy to use internal iterator
• FileVisitor invoked for each directory/file in hierarchy
• Options to control if symbolic links are followed, maximum depth...
• Use to implement recursive copy, move, delete, chmod...
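
For example, a minimal recursive-delete sketch:

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class RecursiveDelete {
    public static void main(String[] args) throws IOException {
        Path start = Paths.get("mydir");
        Files.walkFileTree(start, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException {
                Files.delete(file);
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc)
                    throws IOException {
                if (exc != null) throw exc;   // a child could not be visited
                Files.delete(dir);            // delete the directory after its contents
                return FileVisitResult.CONTINUE;
            }
        });
    }
}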


Files.getFileAttributeView method

• selects the view by a class literal that works as a type token
• the method returns an instance of the view

BasicFileAttributeView view =
    Files.getFileAttributeView(file, BasicFileAttributeView.class);
// bulk read of basic attributes
BasicFileAttributes attrs = view.readAttributes();


DosFileAttributeView

• Provides access to legacy DOS attributes
• Implementable "server-side" on non-Windows platforms

AclFileAttributeView

• Provides access to Access Control Lists (ACLs)
• Based on the NFSv4 ACL model

UserDefinedFileAttributeView

• Provides access to attributes as name/value pairs
• Mappable to file systems that support named subfiles


PosixFileAttributeView

• Unix equivalent of stat, chmod, chown...

PosixFileAttributes attrs =
    Files.readAttributes(file, PosixFileAttributes.class);
String mode = PosixFilePermissions.toString(attrs.permissions());
System.out.format("%s %s %s%n", mode, attrs.owner(), attrs.group());
// prints, for example: rwxrw-r-- alanb java

import static java.nio.file.attribute.PosixFilePermission.*;
Files.setPosixFilePermissions(file,
    EnumSet.of(OWNER_READ, OWNER_WRITE, GROUP_WRITE, OTHERS_READ));

File change notification

import static java.nio.file.StandardWatchEventKinds.*;

WatchService watcher = FileSystems.getDefault().newWatchService();
WatchKey key = dir.register(watcher, ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY);
for (;;) {
    // wait for a key to be signalled (take() throws InterruptedException)
    key = watcher.take();
    // process events
    for (WatchEvent<?> ev : key.pollEvents()) {
        if (ev.kind() == ENTRY_MODIFY) {
            // handle the modified entry
        }
    }
    // reset the key so it can be signalled again
    key.reset();
}

Java Copying and Moving Files using path



Files.copy copies a file to a target location
• Options to replace an existing file, copy file attributes...

Files.move moves a file to a target location
• Option to replace an existing file
• Option to require the operation to be atomic

import static java.nio.file.StandardCopyOption.*;

Path source = Paths.get("C:\\My Documents\\Stuff.odp");
Path target = Paths.get("D:\\Backup\\MyStuff.odp");
Files.copy(source, target);
Files.copy(source, target, REPLACE_EXISTING, COPY_ATTRIBUTES);
Files.move(source, target, ATOMIC_MOVE);

Java - File Change Notification (WatchService)



• Improve performance of applications that are forced to poll the file system today
• WatchService
– Watch registered objects for events and changes
– Makes use of native event facility where available
– Supports concurrent processing of events
• Path implements Watchable
– Register directory to get events when entries are created, deleted, or modified


WatchService API


• WatchKey represents registration of a watchable object with a WatchService.
// register
WatchKey key = file.register(watcher, ENTRY_CREATE, ENTRY_MODIFY);
// cancel registration
key.cancel();

• the WatchKey is signalled and queued when an event is detected
• the WatchService poll or take methods are used to retrieve signalled keys
• events are processed
• the WatchKey reset method returns the key to the ready state

interface WatchService {
    WatchKey take() throws InterruptedException;
    WatchKey poll();
    WatchKey poll(long timeout, TimeUnit unit) throws InterruptedException;
}

Java AsynchronousSocketChannel


Using Future


Example:

AsynchronousSocketChannel ch = ...
ByteBuffer buf = ...
Future<Integer> result = ch.read(buf);

// check if the I/O operation has completed
boolean isDone = result.isDone();

// wait for the I/O operation to complete
int nread = result.get();

// or wait with a timeout
int nread2 = result.get(5, TimeUnit.SECONDS);

CompletionHandler<V,A>

• V = type of the result value
• A = type of the object attached to the I/O operation
  – used to pass context; typically encapsulates the connection context
• completed method invoked on success
• failed method invoked if the I/O operation fails


Using CompletionHandler

class Connection { ... }

class Handler implements CompletionHandler<Integer,Connection> {
    public void completed(Integer result, Connection conn) {
        int nread = result;
        // handle result
    }
    public void failed(Throwable exc, Connection conn) {
        // error handling
    }
}

AsynchronousSocketChannel ch = ...
ByteBuffer buf = ...
Connection conn = ...
Handler handler = ...
ch.read(buf, conn, handler);


AsynchronousSocketChannel.read()
Once a connection has been accepted, it is time to read some bytes:

AsynchronousSocketChannel.read(ByteBuffer dst,
    A attachment,
    CompletionHandler<Integer, ? super A> handler);

> Hey hey, you see the problem, right?
> Who remembers when I was scared by SelectionKey.attach()?

Trouble, trouble, trouble:
– Let's say you get 10,000 accepted connections
– Hence 10,000 ByteBuffers are created, and the read operations are invoked
– Now we are waiting, waiting, waiting for the remote client(s) to send us bytes (slow clients/network)
– Another 10,000 requests come in, and we again create 10,000 ByteBuffers and invoke the read() operations
> BOOM!
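
One way to defuse this (a sketch under stated assumptions, not part of the original post): bound the number of in-flight reads by drawing ByteBuffers from a fixed pool, so a burst of connections waits for a buffer instead of allocating 10,000 more.

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BufferPool {

    private final BlockingQueue<ByteBuffer> pool;

    public BufferPool(int buffers, int bufferSize) {
        this.pool = new ArrayBlockingQueue<>(buffers);
        for (int i = 0; i < buffers; i++) {
            pool.offer(ByteBuffer.allocateDirect(bufferSize));
        }
    }

    // take() blocks when all buffers are in flight, so at most
    // `buffers` reads are outstanding at any given time
    public void read(AsynchronousSocketChannel ch) throws InterruptedException {
        ByteBuffer buf = pool.take();
        ch.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
            @Override
            public void completed(Integer nread, ByteBuffer b) {
                // ... consume the bytes in b ...
                b.clear();
                pool.offer(b); // return the buffer for reuse
            }

            @Override
            public void failed(Throwable exc, ByteBuffer b) {
                b.clear();
                pool.offer(b); // always return the buffer
            }
        });
    }
}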

2020-04-27

Durga Sir Core Java SCJP/OCJP Notes

Durga Sir is one of the best teachers in Hyderabad, not only for learning but also attitude-wise. Core Java and Advanced Java notes by Durga Soft.

2020-04-25

Java Predicate Example – Predicate Filter

In mathematics, a predicate is usually understood as a boolean-valued function P: X → {true, false}, called the predicate on X. You can think of it as an operator or function that returns a true or false value.

Java 8 Predicates Usage
In Java 8, Predicate is a functional interface, so it can be used as the assignment target for a lambda expression or method reference. So, where can we use these true/false returning functions in day-to-day programming? You can use a predicate anywhere you need to evaluate a condition on a group/collection of similar objects such that the evaluation results in true or false.
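
A minimal sketch of the idea:

import java.util.function.Predicate;

Predicate<String> isEmpty = String::isEmpty;

System.out.println(isEmpty.test(""));     // true
System.out.println(isEmpty.test("java")); // false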

For example, some real-time use cases:

• Find all children born after a particular date
• Pizzas ordered within a specific time
• Employees older than a certain age, etc.

Java Predicate Interface
So, java predicates seem to be an interesting thing. Let's go deeper.

As I said, Predicate is a functional interface. This means we can pass a lambda expression wherever a Predicate is expected. One such method is filter() in the Stream interface:

/**
 * Returns a stream consisting of the elements of this stream that match
 * the given predicate.
 *
 * <p>This is an <a href="package-summary.html#StreamOps">intermediate
 * operation</a>.
 *
 * @param predicate a non-interfering stateless predicate to apply to each element to determine if it
 * should be included in the new returned stream.
 * @return the new stream
 */
Stream<T> filter(Predicate<? super T> predicate);
We can think of a stream as a mechanism for creating a sequence of elements that supports sequential and parallel aggregate operations. This means we can, at any time, collect and perform some operation on all elements of the stream in a single call.

So, essentially, we can use streams and predicates to:

• first filter certain elements from a group, and
• then perform some operation on the filtered elements.

Using Predicate on a collection
To demonstrate, we have an Employee class as follows:


Employee.java

package predicateExample;

public class Employee {
   
   public Employee(Integer id, Integer age, String gender, String fName, String lName){
       this.id = id;
       this.age = age;
       this.gender = gender;
       this.firstName = fName;
       this.lastName = lName;
   }
   
   private Integer id;
   private Integer age;
   private String gender;
   private String firstName;
   private String lastName;

   //Please generate Getter and Setters

    @Override
    public String toString() {
        return this.id.toString()+" - "+this.age.toString();
    }
}
1. All employees who are male and older than 21

public static Predicate<Employee> isAdultMale()
{
    return p -> p.getAge() > 21 && p.getGender().equalsIgnoreCase("M");
}
2. All employees who are female and older than 18

public static Predicate<Employee> isAdultFemale()
{
    return p -> p.getAge() > 18 && p.getGender().equalsIgnoreCase("F");
}
3. All Employees whose age is more than a given age

public static Predicate<Employee> isAgeMoreThan(Integer age)
{
    return p -> p.getAge() > age;
}
You can build more of them as needed. So far, so good. I have put the above three methods in EmployeePredicates.java:

EmployeePredicates.java

package predicateExample;

import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class EmployeePredicates
{
    public static Predicate<Employee> isAdultMale() {
        return p -> p.getAge() > 21 && p.getGender().equalsIgnoreCase("M");
    }
   
    public static Predicate<Employee> isAdultFemale() {
        return p -> p.getAge() > 18 && p.getGender().equalsIgnoreCase("F");
    }
   
    public static Predicate<Employee> isAgeMoreThan(Integer age) {
        return p -> p.getAge() > age;
    }
   
    public static List<Employee> filterEmployees (List<Employee> employees,
                                                Predicate<Employee> predicate)
    {
        return employees.stream()
                    .filter( predicate )
                    .collect(Collectors.<Employee>toList());
    }
}

You will see that I created another utility method, filterEmployees(), to show the java predicate filter. Basically, it makes the code neat and reduces duplication. You can also write multiple predicates and combine them into a predicate chain, just as in the builder pattern.

Therefore, in this function, we pass a list of employees and a predicate, and the function returns a new collection of employees satisfying the condition specified in the argument predicate.
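
For example, a small sketch of such chaining (using the predicates defined above and the employees list built in the test class below; and(), or(), and negate() are default methods of Predicate):

Predicate<Employee> maleOver35 = isAdultMale().and(isAgeMoreThan(35));
Predicate<Employee> femaleOrSenior = isAdultFemale().or(isAgeMoreThan(60));

System.out.println(filterEmployees(employees, maleOver35));
System.out.println(filterEmployees(employees, femaleOrSenior));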


TestEmployeePredicates.java

package predicateExample;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import static predicateExample.EmployeePredicates.*;

public class TestEmployeePredicates
{
    public static void main(String[] args)
    {
        Employee e1 = new Employee(1,23,"M","Rick","Beethovan");
        Employee e2 = new Employee(2,13,"F","Martina","Hengis");
        Employee e3 = new Employee(3,43,"M","Ricky","Martin");
        Employee e4 = new Employee(4,26,"M","Jon","Lowman");
        Employee e5 = new Employee(5,19,"F","Cristine","Maria");
        Employee e6 = new Employee(6,15,"M","David","Feezor");
        Employee e7 = new Employee(7,68,"F","Melissa","Roy");
        Employee e8 = new Employee(8,79,"M","Alex","Gussin");
        Employee e9 = new Employee(9,15,"F","Neetu","Singh");
        Employee e10 = new Employee(10,45,"M","Naveen","Jain");
       
        List<Employee> employees = new ArrayList<Employee>();
        employees.addAll(Arrays.asList(new Employee[]{e1,e2,e3,e4,e5,e6,e7,e8,e9,e10}));
               
        System.out.println( filterEmployees(employees, isAdultMale()) );
       
        System.out.println( filterEmployees(employees, isAdultFemale()) );
       
        System.out.println( filterEmployees(employees, isAgeMoreThan(35)) );
       
        //Employees other than the above "isAgeMoreThan(35)" collection
        //can be obtained using negate()
        System.out.println(filterEmployees(employees, isAgeMoreThan(35).negate()));
    }
}

Output:

[1 - 23, 3 - 43, 4 - 26, 8 - 79, 10 - 45]
[5 - 19, 7 - 68]
[3 - 43, 7 - 68, 8 - 79, 10 - 45]
[1 - 23, 2 - 13, 4 - 26, 5 - 19, 6 - 15, 9 - 15]
Predicates are indeed a very good addition in Java 8, and I will use them whenever possible.

Final Thoughts on Predicates in Java 8

• They move your conditions (sometimes business logic) to a central location, which helps in unit testing them separately.
• Any change need not be duplicated in multiple places; Java predicates improve code maintainability.
• Code such as filterEmployees(employees, isAdultFemale()) is far more readable than writing if-else blocks.

Okay, these are my thoughts on Java 8 predicates. What do you think of this feature? Share with all of us in the comments section.

Difference between @Controller and @RestController

It is obvious from the previous section that @RestController is a convenience annotation that does nothing more than add the @Controller and @ResponseBody annotations.

The main difference between the traditional MVC @Controller and the RESTful web service @RestController is the way the HTTP response body is created. The REST controller does not rely on a view technology to render server-side data to HTML; it simply populates and returns the domain object itself.

Object data will be written directly to the HTTP response in the form of JSON or XML, and parsed by the client to further process it to modify the existing view or for any other purpose.

Using @Controller in spring mvc application
@Controller example without @ResponseBody

@Controller
@RequestMapping("employees")
public class EmployeeController
{
    @RequestMapping(value = "/{name}", method = RequestMethod.GET)
    public Employee getEmployeeByName(@PathVariable String name, Model model) {

        //pull data and set in model

        return employeeTemplate;
    }
}

Using @Controller with @ResponseBody in spring
@Controller example with @ResponseBody

@Controller
@ResponseBody
@RequestMapping("employees")
public class EmployeeController
{
    @RequestMapping(value = "/{name}", method = RequestMethod.GET, produces = "application/json")
    public Employee getEmployeeByName(@PathVariable String name) {

        //pull data

        return employee;
    }
}

Using @RestController in spring
@RestController example

@RestController
@RequestMapping("employees")
public class EmployeeController
{
    @RequestMapping(value = "/{name}", method = RequestMethod.GET, produces = "application/json")
    public Employee getEmployeeByName(@PathVariable String name) {

        //pull data

        return employee;
    }
}

ELK tutorial

What is ELK?
ELK is a combination of 3 open source products:


  • Elasticsearch
  • Logstash
  • Kibana

All are developed and maintained by Elastic.

Elasticsearch is a NoSQL database based on the Lucene search engine. Logstash is a log pipeline tool that accepts data input, performs data conversion, and then outputs data. Kibana is an interface layer that works on top of Elasticsearch. In addition, the ELK stack also contains a series of log collector tools called Beats.

The most common usage scenario of ELK is as a log system for Internet products. Of course, the ELK stack can also be used in other aspects, such as: business intelligence, big data analysis, etc.

Why use ELK?
The ELK stack is very popular, because it is powerful, open source and free. For smaller companies such as SaaS companies and startups, using ELK to build a log system is very cost-effective.

Netflix, Facebook, Microsoft, LinkedIn and Cisco also use ELK to monitor logs.

Why use a logging system?
The log system records all aspects of system operation and business processing, and plays an increasingly important role in troubleshooting, business analysis, data mining, and big data analysis.

ELK architecture
The various components in the ELK stack cooperate with each other and do not require much additional configuration. Of course, the architecture will be different for different use scenarios.

For small development environments, the architecture is usually as follows:


For a production environment with a large amount of data, other parts may be added to the log architecture, for example: to improve elasticity (add Kafka, RabbitMQ, Redis) and security (add nginx):



ELK install Elasticsearch

To build an ELK stack, install the following open source components:


  • Elasticsearch
  • Kibana
  • Beats
  • Logstash (optional)


Install Elasticsearch

Elasticsearch is a near real-time full-text search engine, which has multiple uses, such as a log system.

To download and install Elasticsearch, open a command line terminal and execute the following commands (deb for Debian / Ubuntu, rpm for Redhat / Centos / Fedora, mac for OS X, linux for linux, win for Windows):

deb:

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-amd64.deb
sudo dpkg -i elasticsearch-7.1.0-amd64.deb
sudo /etc/init.d/elasticsearch start

rpm:

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-x86_64.rpm
sudo rpm -i elasticsearch-7.1.0-x86_64.rpm
sudo service elasticsearch start

mac:

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-darwin-x86_64.tar.gz
tar -xzvf elasticsearch-7.1.0-darwin-x86_64.tar.gz
cd elasticsearch-7.1.0
./bin/elasticsearch

linux:

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-linux-x86_64.tar.gz
tar -xzvf elasticsearch-7.1.0-linux-x86_64.tar.gz
cd elasticsearch-7.1.0
./bin/elasticsearch

win:

Download the Elasticsearch 7.1.0 Windows zip file from the Elasticsearch download page.
Extract the contents of the zip file to a directory, for example C:\Program Files.
Open a command line window as an administrator and switch to the unzipped directory, for example:

cd C:\Program Files\elasticsearch-7.1.0

Start Elasticsearch:

bin\elasticsearch.bat
Confirm that Elasticsearch started
To confirm that the Elasticsearch service has started, access port 9200:

curl http://127.0.0.1:9200

On Windows, if cURL is not installed, you can open the above URL with a browser.

If everything is normal, you can see the following response:

{
  "name" : "qikegu",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "qZk2EjpQRDiYYyhccomWyw",
  "version" : {
    "number" : "7.1.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "606a173",
    "build_date" : "2019-05-16T00:43:15.323135Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}


ELK install Kibana


Kibana is an interface layer that works with Elasticsearch. You can use Kibana to search and view the data in Elasticsearch, and to easily perform complex data analysis and display the results in various charts and tables.

It is recommended to install Kibana and Elasticsearch on the same server, but this is not required. If they are installed on different servers, you need to modify the Elasticsearch server URL (IP:PORT) in the Kibana configuration file kibana.yml.
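
For Kibana 7.x the relevant entry in kibana.yml is elasticsearch.hosts; for example (the address below is just a placeholder):

elasticsearch.hosts: ["http://192.168.1.10:9200"]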

To download and install Kibana, open a command line window and execute the following command:

deb, rpm, or linux:

curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-7.1.0-linux-x86_64.tar.gz
tar xzvf kibana-7.1.0-linux-x86_64.tar.gz
cd kibana-7.1.0-linux-x86_64/
./bin/kibana

mac:

curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-7.1.0-darwin-x86_64.tar.gz
tar xzvf kibana-7.1.0-darwin-x86_64.tar.gz
cd kibana-7.1.0-darwin-x86_64/
./bin/kibana

win:

Download the Kibana 7.1.0 Windows zip file from the Kibana download page.
Extract the contents of the zip file to a directory, for example C:\Program Files.
Open a command line window as an administrator and switch to the unzipped directory, for example:
cd C:\Program Files\kibana-7.1.0-windows

Start Kibana:
bin\kibana.bat

To learn more about installing, configuring, and running Kibana, please refer to the official website.

Enter Kibana interface

The port of Kibana is 5601. To enter the Kibana web interface, use a browser to access the Kibana website, for example: http://127.0.0.1:5601.

If you cannot connect from the outside, you can usually do the following:

In the configuration file kibana.yml, change the server.host configuration item to server.host: "0.0.0.0", which indicates that remote connections are allowed. The default binding, localhost, means that only local connections are allowed.
In the firewall, open port 5601. The following example, under CentOS, opens port 5601 in the firewall:
    [root@qikegu ~]# firewall-cmd --permanent --add-port=5601/tcp
    success
    [root@qikegu ~]# firewall-cmd --reload
    success
    [root@qikegu ~]# firewall-cmd --list-ports
    5601/tcp


ELK install Beat
 

Beat is a data collection tool that is installed on the server and sends the collected data to Elasticsearch. Beat can directly send data to Elasticsearch, or it can be sent to Logstash first, and then processed by Logstash before being sent to Elasticsearch.

Each Beat is an independently installable product. This tutorial shows how to install and run Metricbeat, and how to enable the Metricbeat system module to collect system metrics.

To learn more about each Beat, refer to the relevant documentation:

Beat          Data collected
Auditbeat     Audit data
Filebeat      Log files
Functionbeat  Cloud data
Heartbeat     Availability monitoring
Journalbeat   Systemd journals
Metricbeat    Operating metrics, such as system metrics
Packetbeat    Network traffic
Winlogbeat    Windows event logs
Install Metricbeat
To download and install Metricbeat, open a command line window and execute the following command:

deb:

curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.1.0-amd64.deb
sudo dpkg -i metricbeat-7.1.0-amd64.deb

rpm:

curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.1.0-x86_64.rpm
sudo rpm -vi metricbeat-7.1.0-x86_64.rpm

mac:

curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.1.0-darwin-x86_64.tar.gz
tar xzvf metricbeat-7.1.0-darwin-x86_64.tar.gz

linux:

curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.1.0-linux-x86_64.tar.gz
tar xzvf metricbeat-7.1.0-linux-x86_64.tar.gz

win:

Download the Metricbeat Windows zip file from the Metricbeat download page.
Extract the contents of the zip file to C:\Program Files
Rename the Metricbeat-7.1.0-windows directory to Metricbeat.
Open the PowerShell command line as an administrator (right-click the PowerShell icon and select Run as Administrator).
On the PowerShell command line, run the following command to install Metricbeat as a Windows service:
    PS > cd 'C:\Program Files\Metricbeat'
    PS C:\Program Files\Metricbeat> .\install-service-metricbeat.ps1

Collect system metrics and send them to Elasticsearch
Metricbeat provides preset monitoring modules that can be enabled directly, like turning on a switch.

This section uses the preset system module, which collects system metrics such as CPU usage, memory, file system, disk I/O and network I/O statistics, as well as per-process statistics.

Before you start: make sure that Elasticsearch and Kibana are running, and that Elasticsearch is ready to receive Metricbeat data.

Enable the system module and start collecting system metrics:

From the Metricbeat installation directory, enable the system module:
deb and rpm:

sudo metricbeat modules enable system

mac and linux:

./metricbeat modules enable system

win:

PS C:\Program Files\Metricbeat> .\metricbeat.exe modules enable system

Set up the initial environment:
deb and rpm:

sudo metricbeat setup -e

mac and linux:

./metricbeat setup -e
win:

PS C:\Program Files\Metricbeat> metricbeat.exe setup -e

Start Metricbeat:
deb and rpm:

sudo service metricbeat start

mac and linux:

./metricbeat -e

win:

PS C:\Program Files\Metricbeat> Start-Service metricbeat

Metricbeat starts and begins sending system data to Elasticsearch.

View system metrics in Kibana
Open the following URL in a browser: http://<your URL>:5601/app/kibana#/dashboard/Metricbeat-system-overview-ecs

If you do not see data in Kibana, try enlarging the time range. By default, Kibana displays the last 15 minutes. If you see an error, make sure Metricbeat is running, then refresh the page.


Click Host Overview to view detailed metrics for the selected host.



So far, we have built a basic ELK architecture and successfully collected system information.


ELK installs Logstash


In ELK, installing Logstash is optional.

Logstash is a powerful tool that provides a large number of plug-ins for parsing and processing data from various data sources. If the data collected by Beats needs to be processed before use, Logstash must be integrated.

To download and install Logstash, open a command line window and execute the following commands.

Logstash requires Java 8 or Java 11, so make sure Java is installed:

[root@qikegu ~]# java --version
openjdk 11.0.3 2019-04-16 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.3+7-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.3+7-LTS, mixed mode, sharing)

deb:

curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-7.1.0.deb
sudo dpkg -i logstash-7.1.0.deb

rpm:

curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-7.1.0.rpm
sudo rpm -i logstash-7.1.0.rpm

mac and linux:

curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-7.1.0.tar.gz
tar -xzvf logstash-7.1.0.tar.gz

win:

Download the Logstash 7.1.0 Windows zip file from the Logstash download page.
Extract the contents of the zip file to a directory, for example C:\Program Files.
To learn more about installing, configuring, and running Logstash, please read the official website documentation.

Configure Logstash to listen for Beats input
Logstash provides an input plugin for accepting data input. In this tutorial, you will create a Logstash pipeline configuration to listen for Beat input and send the received data to Elasticsearch.

Configure Logstash
Create a new Logstash pipeline configuration file and name it demo-metrics-pipeline.conf. If you installed Logstash as a deb or rpm package, create the file in the Logstash configuration directory (for example /etc/logstash/conf.d/).

The file must contain:

• an input configuration that sets the Beats port to 5044
• an output configuration with the Elasticsearch connection information

Example:

input {
  beats {
    port => 5044
  }
}

# The filter part of this file is commented out to indicate that it
# is optional.
# filter {
#
# }

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

When Logstash is started with this pipeline configuration, Beats events are sent to Elasticsearch through Logstash, where you can use Logstash's powerful functions to analyze and process the data.

Start Logstash
Start Logstash. If you install Logstash as a deb or rpm package, make sure that the configuration file is in the configuration directory.

deb:

sudo /etc/init.d/logstash start

rpm:

sudo service logstash start

mac and linux:

./bin/logstash -f demo-metrics-pipeline.conf

win:

bin\logstash.bat -f demo-metrics-pipeline.conf

Logstash now listens for events sent by Beats. Next, you need to configure Metricbeat to send events to Logstash.

Configure Metricbeat to send events to Logstash
By default, Metricbeat sends events to Elasticsearch.

To send events to Logstash, you need to modify the configuration file metricbeat.yml. This file can be found in the Metricbeat installation directory, or in /etc/metricbeat (rpm and deb).

Comment out the output.elasticsearch section and enable the output.logstash section:

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

...

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

Save the file and restart Metricbeat for the configuration changes to take effect.

Define filters to extract data from fields
Currently, Logstash just forwards the event to Elasticsearch, without processing it. Next, you will learn to use filters.

The system data collected by Metricbeat includes a field named cmdline, which contains the complete command line with which the process was started. For example:

"cmdline": "/Applications/Firefox.app/Contents/MacOS/plugin-container.app/Contents/MacOS/plugin-container -childID 3
-isForBrowser -boolPrefs 36:1|299:0| -stringPrefs 285:38;{b77ae304-9f53-a248-8bd4-a243dbf2cab1}| -schedulerPrefs
0001,2 -greomni /Applications/Firefox.app/Contents/Resources/omni.ja -appomni
/Applications/Firefox.app/Contents/Resources/browser/omni.ja -appdir
/Applications/Firefox.app/Contents/Resources/browser -profile
/Users/dedemorton/Library/Application Support/Firefox/Profiles/mftvzeod.default-1468353066634
99468 gecko-crash-server-pipe.99468 org.mozilla.machname.1911848630 tab"
You may only need the path of the command rather than sending the entire command line to Elasticsearch. One way is to use a Grok filter; learning Grok is beyond the scope of this tutorial, but if you want to learn more, refer to the Grok filter plugin documentation.

To extract the path, in the Logstash configuration file created earlier, between the input and output sections, add the following Grok filter:

filter {
  if [system][process] {
    if [system][process][cmdline] {
      grok {
        match => {
          "[system][process][cmdline]" => "^%{PATH:[system][process][cmdline_path]}"
        }
        remove_field => "[system][process][cmdline]"
      }
    }
  }
}

The pattern matches the command path and stores it in a field named cmdline_path.
The original cmdline field is removed, so it is not indexed in Elasticsearch.
When complete, the full configuration file should look like this:

input {
  beats {
    port => 5044
  }
}

filter {
  if [system][process] {
    if [system][process][cmdline] {
      grok {
        match => {
          "[system][process][cmdline]" => "^%{PATH:[system][process][cmdline_path]}"
        }
        remove_field => "[system][process][cmdline]"
      }
    }
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "qikegu-%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

Restart Logstash for the configuration to take effect. Events now contain a field named cmdline_path, holding the path of the command:

"cmdline_path": "/Applications/Firefox.app/Contents/MacOS/plugin-container.app/Contents/MacOS/plugin-container"

Tomcat startup memory settings


Tomcat can be started either via startup.bat or by being registered as a Windows service; both cases are explained below.

1. Starting with startup.bat


Find catalina.bat in the tomcat_home/bin directory, open it with a text editor, and add the following line:

set JAVA_OPTS=-Xms1024M -Xmx1024M -XX:PermSize=256M -XX:MaxNewSize=256M -XX:MaxPermSize=256M

Explain the parameters:

-Xms1024M: initial heap size (note: without the M suffix the unit is KB)

-Xmx1024M: maximum heap size

-XX:PermSize=256M: initial size of the permanent generation (class loading memory pool)

-XX:MaxPermSize=256M: maximum size of the permanent generation

-XX:MaxNewSize=256M: maximum size of the young (new) generation

There is also a -server parameter, which starts the JVM in server mode; it starts more slowly than client mode but performs better. Choose for yourself.

2. Starting as a Windows service


If your Tomcat is registered as a Windows service and started as a service, the above method has no effect, because Tomcat then reads its parameters from the registry instead of from the batch file. There are two ways to set the JVM parameters.

The first is relatively simple: Tomcat provides a dialog for setting the startup parameters. Double-click tomcat6w.exe in the tomcat_home/bin directory, as shown in the figure.


The Initial memory pool field is the initial heap size, and the Maximum memory pool field is the maximum heap size.

To set the size of the Perm Gen pool, you need to add parameters in the Java Options box:

-Dcatalina.base=%tomcat_home%
-Dcatalina.home=%tomcat_home%
-Djava.endorsed.dirs=%tomcat_home%\endorsed
-Djava.io.tmpdir=%tomcat_home%\temp
-XX:PermSize=256M
-XX:MaxPermSize=256M
-XX:ReservedCodeCacheSize=48M
-Duser.timezone=GMT+08

(PS: online sources say there must be no space at the end of each line; I have not tried otherwise.)

The second method is to open the registry at HKEY_LOCAL_MACHINE\SOFTWARE\Apache Software Foundation\Procrun 2.0\Tomcat6\Parameters\Java (the path may differ slightly).

Modify the value of Options to add the above parameters, and you are done. (Don't forget to back up the registry first.)

2020-04-24

Setting up Swagger in Spring REST API


1. Overview

Good documentation is important when creating REST APIs.

Moreover, whenever you change the API, you must reflect the changes in the reference documentation. Doing this manually is very tedious, so automating it is essential.

In this tutorial we will look at Swagger 2 for a Spring REST web service, using Springfox, an implementation of the Swagger 2 specification.

If you are not familiar with Swagger, we recommend that you visit its official web page before reading this article.

2.  Target Project

Creating the REST service is not within the scope of this document, so you should already have a project to apply this to. If not, the following links are good starting points:

Build a REST API with Spring 4 and Java Config article
Building a RESTful Web Service.

3. Adding the Maven Dependency

As mentioned above, we will use the Springfox implementation of the Swagger specification. Add the following dependency to the Maven project's pom.xml file:

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.2.2</version>
</dependency>

4.  Integrating Swagger 2 into the Project

Java Configuration
The configuration of Swagger mainly centers around the Docket bean:

@Configuration
@EnableSwagger2
public class SwaggerConfig {                                   
    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2) 
          .select()                                 
          .apis(RequestHandlerSelectors.any())             
          .paths(PathSelectors.any())                         
          .build();                                         
    }
}
Swagger 2 is enabled with the @EnableSwagger2 annotation.

After the Docket bean is defined, its select() method returns an instance of ApiSelectorBuilder, which provides a way to control the endpoints exposed by Swagger.

Predicates for the selection of RequestHandlers can be configured with the help of RequestHandlerSelectors and PathSelectors. Using any() for both will make documentation for your entire API available through Swagger.
This setup allows you to integrate Swagger 2 into your existing Spring Boot project. Other Spring projects require some extra work.

Configuration Without Spring Boot
If you don't use Spring Boot, you won't benefit from the auto-configuration of resource handlers. The Swagger UI adds a set of resources, which you must configure as part of a class that extends WebMvcConfigurerAdapter and is annotated with @EnableWebMvc:

@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
    registry.addResourceHandler("swagger-ui.html")
      .addResourceLocations("classpath:/META-INF/resources/");

    registry.addResourceHandler("/webjars/**")
      .addResourceLocations("classpath:/META-INF/resources/webjars/");
}


Verification
To verify that Springfox is working, visit the following URL in your browser:

http://localhost:8080/your-app-root/v2/api-docs

The result is a large JSON response of key-value pairs, which is not very human-readable. Fortunately, Swagger provides the Swagger UI for this purpose.

5. Swagger UI


The Swagger UI is a built-in solution that makes user interaction with the Swagger-generated API documentation much easier.

Enabling Springfox's Swagger UI
To use the Swagger UI, you need one additional Maven dependency:

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.2.2</version>
</dependency>
Now you can test it by visiting the following link in the browser: http://localhost:8080/your-app-root/swagger-ui.html

The following page will be displayed:

Exploring the Swagger Documentation
Now you will see a list of all the controllers defined in your application. Clicking on one of them will list the valid HTTP methods (DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT).

Expanding each method provides additional useful data, such as the response status, content type, and the list of parameters. You can also try each method using the UI.

Swagger's ability to stay synchronized with your code base is very important. To demonstrate this, let's add a new controller to the application:

@RestController
public class CustomController {

    @RequestMapping(value = "/custom", method = RequestMethod.POST)
    public String custom() {
        return "custom";
    }
}
Now if you refresh the Swagger documentation, you should see custom-controller in the controller list. As created above, only one POST method is shown.

6. Advanced Configuration


More configuration of the generated API documentation is possible through the Docket bean of your application.

Filtering the API for Swagger's Response

Documenting the entire API is usually not desirable. You can limit Swagger's response by passing parameters to the apis() and paths() methods of the Docket class.

As seen above, RequestHandlerSelectors allows any or none, but it can also be used to filter the API based on the base package, class annotation, or method annotations.

PathSelectors provides additional filtering with predicates that scan the request paths of your application: any(), none(), regex(), or ant() can be used.

In the example below, we use the ant() predicate so that Swagger includes only controllers from a specific package, with specific request paths:

@Bean
public Docket api() {               
    return new Docket(DocumentationType.SWAGGER_2)         
      .select()                                     
      .apis(RequestHandlerSelectors.basePackage("org.baeldung.web.controller"))
      .paths(PathSelectors.ant("/foos/*"))                   
      .build();
}

Custom Information
Swagger also provides some default values in its response, such as "Api Documentation", "Created by Contact Email", and "Apache 2.0", which you can customize.

To change these values, you can use the apiInfo(ApiInfo apiInfo) method. The ApiInfo class contains custom information about the API:

@Bean
public Docket api() {               
    return new Docket(DocumentationType.SWAGGER_2)         
      .select()
      .apis(RequestHandlerSelectors.basePackage("com.example.controller"))
      .paths(PathSelectors.ant("/foos/*"))
      .build()
      .apiInfo(apiInfo());
}

private ApiInfo apiInfo() {
    ApiInfo apiInfo = new ApiInfo(
      "My REST API",
      "Some custom description of API.",
      "API TOS",
      "Terms of service",
      "myeaddress@company.com",
      "License of API",
      "API license URL");
    return apiInfo;
}

Custom Methods Response Messages
Swagger allows globally overriding the response messages of HTTP methods through the Docket's globalResponseMessage() method. First, you need to tell Swagger not to use the default response messages.

Suppose you want to override the 500 and 403 response messages for all GET methods. To do this, some code must be added to the Docket initialization block (the original initialization code is excluded here):


.useDefaultResponseMessages(false)                                 
.globalResponseMessage(RequestMethod.GET,                   
  newArrayList(new ResponseMessageBuilder() 
    .code(500)
    .message("500 message")
    .responseModel(new ModelRef("Error"))
    .build(),
    new ResponseMessageBuilder()
      .code(403)
      .message("Forbidden!")
      .build()));

7. Conclusion

In this tutorial we set up Swagger 2 to generate documentation for a Spring REST API. We also looked at ways to visualize and customize Swagger's output.

The full implementation of this tutorial can be found in the corresponding GitHub project; it is Eclipse-based and should be easy to import and run.

Asynchronous processing with @Async in Spring


1. Overview

In this article, we will look at asynchronous execution support in Spring and the @Async annotation. Simply put, annotating a method of a bean with @Async will make it execute in a separate thread; that is, the caller does not wait for the called method to complete.

2. Enable Async Support

To enable asynchronous processing with Java configuration, simply add the @EnableAsync annotation to a configuration class:

@Configuration
@EnableAsync
public class SpringAsyncConfig { ... }
The above annotation is sufficient, but you can also set a few options as needed:

• annotation – by default, @EnableAsync detects Spring's @Async annotation and the EJB 3.1 javax.ejb.Asynchronous; this option can also be used to detect other, custom annotation types.
• mode – indicates the type of advice that should be used: JDK proxy based or AspectJ weaving.
• proxyTargetClass – indicates the type of proxy that should be used: CGLIB or JDK; this attribute has effect only if mode is set to AdviceMode.PROXY.
• order – sets the order in which AsyncAnnotationBeanPostProcessor should be applied; by default it runs last, so that it can take into account all existing proxies.

Asynchronous processing can also be enabled via XML configuration, using the task namespace:

<task:executor id="myexecutor" pool-size="5"  />
<task:annotation-driven executor="myexecutor"/>

3. The @Async Annotation

First, let's go over the rules. @Async has two limitations:

1. It must be applied to public methods only.
2. Self-invocation (calling the async method from within the same class) won't work.

The reasons are simple: the method needs to be public so that it can be proxied, and self-invocation doesn't work because it bypasses the proxy and calls the underlying method directly.
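
A minimal sketch of the self-invocation pitfall (ReportService is a made-up example, assuming @EnableAsync is configured):

@Service
public class ReportService {

    public void generateAll() {
        // BAD: a direct call bypasses the Spring proxy, so @Async is ignored
        // and generateOne() runs synchronously on the caller's thread
        generateOne();
    }

    @Async
    public void generateOne() {
        // runs asynchronously only when invoked through the proxy,
        // i.e. from another bean that has ReportService injected
    }
}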

Methods with void Return Type

A method with a void return type can be configured to run asynchronously as follows:

@Async
public void asyncMethodWithVoidReturnType() {
    System.out.println("Execute method asynchronously. " + Thread.currentThread().getName());
}

Methods with Return Type

@Async can also be applied to a method with a return type, by wrapping the actual return value in a Future object:

@Async
public Future<String> asyncMethodWithReturnType() {
    System.out.println("Execute method asynchronously - " + Thread.currentThread().getName());
    try {
        Thread.sleep(5000);
        return new AsyncResult<String>("hello world !!!!");
    } catch (InterruptedException e) {
        //
    }
    return null;
}
Spring also provides an AsyncResult class that implements Future. It can be used to track the result of asynchronous method execution.

Now let's invoke the above method and retrieve the result of the asynchronous process using the Future object:

public void testAsyncAnnotationForMethodsWithReturnType()
    throws InterruptedException, ExecutionException {
    System.out.println("Invoking an asynchronous method. "
      + Thread.currentThread().getName());
    Future<String> future = asyncAnnotationExample.asyncMethodWithReturnType();

    while (true) {
        if (future.isDone()) {
            System.out.println("Result from asynchronous process - " + future.get());
            break;
        }
        System.out.println("Continue doing something else. ");
        Thread.sleep(1000);
    }
}

4. The Executor

By default, Spring uses a SimpleAsyncTaskExecutor to actually run these methods asynchronously. The defaults can be overridden at two levels: the application level or the individual method level.

Override the Executor at the Method Level

The required executor needs to be declared in a configuration class:


@Configuration
@EnableAsync
public class SpringAsyncConfig {
   
    @Bean(name = "threadPoolTaskExecutor")
    public Executor threadPoolTaskExecutor() {
        return new ThreadPoolTaskExecutor();
    }
}
After that, the executor name should be provided as an attribute value in @Async:

@Async("threadPoolTaskExecutor")
public void asyncMethodWithConfiguredExecutor() {
    System.out.println("Execute method with configured executor - "  + Thread.currentThread().getName());
}

Override the Executor at the Application Level
In this case, the configuration class should implement the AsyncConfigurer interface, which means it must implement the getAsyncExecutor() method. Here, we return the executor for the entire application; this now becomes the default executor for running methods annotated with @Async:

@Configuration
@EnableAsync
public class SpringAsyncConfig implements AsyncConfigurer {
   
    @Override
    public Executor getAsyncExecutor() {
        return new ThreadPoolTaskExecutor();
    }
   
}

5. Exception Handling

When a method's return type is a Future, exception handling is easy: the Future.get() method will throw the exception.

But if the return type is void, exceptions will not be propagated to the calling thread. Hence, we need to add extra configuration to handle exceptions.

We'll create a custom async exception handler by implementing the AsyncUncaughtExceptionHandler interface. Its handleUncaughtException() method is invoked when there is an uncaught asynchronous exception:

public class CustomAsyncExceptionHandler  implements AsyncUncaughtExceptionHandler {

    @Override
    public void handleUncaughtException(Throwable throwable, Method method, Object... obj) {
        System.out.println("Exception message - " + throwable.getMessage());
        System.out.println("Method name - " + method.getName());
        for (Object param : obj) {
            System.out.println("Parameter value - " + param);
        }
    }
   
}
In the previous section, we looked at the AsyncConfigurer interface being implemented by the configuration class. As part of that, we also need to override the getAsyncUncaughtExceptionHandler() method to return our custom asynchronous exception handler:

@Override
public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
    return new CustomAsyncExceptionHandler();
}

6. Conclusion

In this tutorial, we looked at running asynchronous code with Spring. We started with the very basic configuration and annotation to make it work, and also looked at more advanced configuration such as providing our own executor, and exception-handling strategies.

Spring @Async and @EnableAsync annotation Example


@EnableAsync annotation

The @EnableAsync annotation switches on Spring's ability to run @Async methods in a background thread pool. The class below also customizes the Executor by defining a new bean.

Spring’s @Async annotation works with web applications, but you need not set up a web container to see its benefits.

The following listing (from src/main/java/com/example/asyncmethod/AsyncMethodApplication.java) shows how to do so:

package com.example.asyncmethod;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

import java.util.concurrent.Executor;

@SpringBootApplication
@EnableAsync
public class AsyncMethodApplication {

  public static void main(String[] args) {
    // close the application context to shut down the custom ExecutorService
    SpringApplication.run(AsyncMethodApplication.class, args).close();
  }

  @Bean
  public Executor taskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(2);
    executor.setMaxPoolSize(2);
    executor.setQueueCapacity(500);
    executor.setThreadNamePrefix("GithubLookup-");
    executor.initialize();
    return executor;
  }
}

@Async annotation


The findUser method below is flagged with Spring’s @Async annotation, indicating that it should run on a separate thread.

Next, you need to create a service that queries GitHub to find user information. The following listing (from src/main/java/com/example/asyncmethod/GitHubLookupService.java) shows how to do so:

package com.example.asyncmethod;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

import java.util.concurrent.CompletableFuture;

@Service
public class GitHubLookupService {

  private static final Logger logger = LoggerFactory.getLogger(GitHubLookupService.class);

  private final RestTemplate restTemplate;

  public GitHubLookupService(RestTemplateBuilder restTemplateBuilder) {
    this.restTemplate = restTemplateBuilder.build();
  }

  @Async
  public CompletableFuture<User> findUser(String user) throws InterruptedException {
    logger.info("Looking up " + user);
    String url = String.format("https://api.github.com/users/%s", user);
    User results = restTemplate.getForObject(url, User.class);
    // Artificial delay of 1s for demonstration purposes
    Thread.sleep(1000L);
    return CompletableFuture.completedFuture(results);
  }

}

Spring @JsonIgnoreProperties annotation Example

Spring @JsonIgnoreProperties annotation


The @JsonIgnoreProperties annotation, from the Jackson JSON processing library, tells Jackson to ignore any attributes not listed in the class. This makes it easy to make REST calls and bind the responses to domain objects.

package com.example.asyncmethod;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

@JsonIgnoreProperties(ignoreUnknown=true)
public class User {

  private String name;
  private String blog;

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }

  public String getBlog() {
    return blog;
  }

  public void setBlog(String blog) {
    this.blog = blog;
  }

  @Override
  public String toString() {
    return "User [name=" + name + ", blog=" + blog + "]";
  }

}

Python - http — HTTP modules

http — HTTP modules

http is a package that collects several modules for working with the HyperText Transfer Protocol:

  • http.client is a low-level HTTP protocol client; for high-level URL opening use urllib.request
  • http.server contains basic HTTP server classes based on socketserver
  • http.cookies has utilities for implementing state management with cookies
  • http.cookiejar provides persistence of cookies


http.client — HTTP protocol client


This module defines classes which implement the client side of the HTTP and HTTPS protocols. It is normally not used directly — the module urllib.request uses it to handle URLs that use HTTP and HTTPS.

class http.client.HTTPConnection(host, port=None, [timeout, ]source_address=None, blocksize=8192)

class http.client.HTTPSConnection(host, port=None, key_file=None, cert_file=None, [timeout, ]source_address=None, *, context=None, check_hostname=None, blocksize=8192)

For example, the following calls all create instances that connect to the server at the same host and port:

>>> h1 = http.client.HTTPConnection('www.python.org')
>>> h2 = http.client.HTTPConnection('www.python.org:80')
>>> h3 = http.client.HTTPConnection('www.python.org', 80)
>>> h4 = http.client.HTTPConnection('www.python.org', 80, timeout=10)


>>> import http.client
>>> conn = http.client.HTTPSConnection("localhost", 8080)
>>> conn.set_tunnel("www.python.org")
>>> conn.request("HEAD","/index.html")
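
For completeness, here is a minimal end-to-end sketch (not part of the docs excerpt above) that sends a GET request and reads the response; www.python.org is just a stand-in host:

import http.client

# open a TLS connection (port 443 by default)
conn = http.client.HTTPSConnection("www.python.org")
conn.request("GET", "/")
response = conn.getresponse()           # an HTTPResponse object
print(response.status, response.reason)
body = response.read()                  # raw bytes of the response body
print(len(body), "bytes received")
conn.close()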

http.server — HTTP servers


This module defines classes for implementing HTTP servers (Web servers).
http.server is not recommended for production. It only implements basic security checks.

from http.server import HTTPServer, BaseHTTPRequestHandler

def run(server_class=HTTPServer, handler_class=BaseHTTPRequestHandler):
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()

import http.server
import socketserver

PORT = 8000

Handler = http.server.SimpleHTTPRequestHandler

with socketserver.TCPServer(("", PORT), Handler) as httpd:
    print("serving at port", PORT)
    httpd.serve_forever()

Python - concurrent.futures — Launching parallel tasks

concurrent.futures — Launching parallel tasks


The concurrent.futures module provides a high-level interface for asynchronously executing callables.

The asynchronous execution can be performed with threads, using ThreadPoolExecutor, or separate processes, using ProcessPoolExecutor. Both implement the same interface, which is defined by the abstract Executor class.

Executor Objects

class concurrent.futures.Executor
An abstract class that provides methods to execute calls asynchronously. It should not be used directly, but through its concrete subclasses.

submit(fn, /, *args, **kwargs)
Schedules the callable, fn, to be executed as fn(*args, **kwargs) and returns a Future object representing the execution of the callable.

from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(pow, 323, 1235)
    print(future.result())

map(func, *iterables, timeout=None, chunksize=1)
Similar to map(func, *iterables) except:

  • the iterables are collected immediately rather than lazily;
  • func is executed asynchronously and several calls to func may be made concurrently.

The returned iterator raises a concurrent.futures.TimeoutError if __next__() is called and the result isn’t available after timeout seconds from the original call to Executor.map(). timeout can be an int or a float. If timeout is not specified or None, there is no limit to the wait time.

If a func call raises an exception, then that exception will be raised when its value is retrieved from the iterator.

When using ProcessPoolExecutor, this method chops iterables into a number of chunks which it submits to the pool as separate tasks. The (approximate) size of these chunks can be specified by setting chunksize to a positive integer. For very long iterables, using a large value for chunksize can significantly improve performance compared to the default size of 1. With ThreadPoolExecutor, chunksize has no effect.

Changed in version 3.5: Added the chunksize argument.
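
As a quick illustrative sketch of map() (the operands are arbitrary):

from concurrent.futures import ThreadPoolExecutor

# results are yielded in the order of the inputs, even though the
# calls themselves may run concurrently
with ThreadPoolExecutor(max_workers=3) as executor:
    for result in executor.map(pow, [2, 2, 2], [3, 5, 8]):
        print(result)  # 8, 32, 256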

shutdown(wait=True, *, cancel_futures=False)
Signal the executor that it should free any resources that it is using when the currently pending futures are done executing. Calls to Executor.submit() and Executor.map() made after shutdown will raise RuntimeError.

If wait is True then this method will not return until all the pending futures are done executing and the resources associated with the executor have been freed. If wait is False then this method will return immediately and the resources associated with the executor will be freed when all pending futures are done executing. Regardless of the value of wait, the entire Python program will not exit until all pending futures are done executing.

If cancel_futures is True, this method will cancel all pending futures that the executor has not started running. Any futures that are completed or running won’t be cancelled, regardless of the value of cancel_futures.

If both cancel_futures and wait are True, all futures that the executor has started running will be completed prior to this method returning. The remaining futures are cancelled.

You can avoid having to call this method explicitly if you use the with statement, which will shutdown the Executor (waiting as if Executor.shutdown() were called with wait set to True):

import shutil
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=4) as e:
    e.submit(shutil.copy, 'src1.txt', 'dest1.txt')
    e.submit(shutil.copy, 'src2.txt', 'dest2.txt')
    e.submit(shutil.copy, 'src3.txt', 'dest3.txt')
    e.submit(shutil.copy, 'src4.txt', 'dest4.txt')
Changed in version 3.9: Added cancel_futures.

ThreadPoolExecutor
ThreadPoolExecutor is an Executor subclass that uses a pool of threads to execute calls asynchronously.

Deadlocks can occur when the callable associated with a Future waits on the results of another Future. For example:

from concurrent.futures import ThreadPoolExecutor
import time
def wait_on_b():
    time.sleep(5)
    print(b.result())  # b will never complete because it is waiting on a.
    return 5

def wait_on_a():
    time.sleep(5)
    print(a.result())  # a will never complete because it is waiting on b.
    return 6


executor = ThreadPoolExecutor(max_workers=2)
a = executor.submit(wait_on_b)
b = executor.submit(wait_on_a)
And:

def wait_on_future():
    f = executor.submit(pow, 5, 2)
    # This will never complete because there is only one worker thread and
    # it is executing this function.
    print(f.result())

executor = ThreadPoolExecutor(max_workers=1)
executor.submit(wait_on_future)
class concurrent.futures.ThreadPoolExecutor(max_workers=None, thread_name_prefix='', initializer=None, initargs=())
An Executor subclass that uses a pool of at most max_workers threads to execute calls asynchronously.

initializer is an optional callable that is called at the start of each worker thread; initargs is a tuple of arguments passed to the initializer. Should initializer raise an exception, all currently pending jobs will raise a BrokenThreadPool, as well as any attempt to submit more jobs to the pool.

Changed in version 3.5: If max_workers is None or not given, it will default to the number of processors on the machine, multiplied by 5, assuming that ThreadPoolExecutor is often used to overlap I/O instead of CPU work and the number of workers should be higher than the number of workers for ProcessPoolExecutor.

New in version 3.6: The thread_name_prefix argument was added to allow users to control the threading.Thread names for worker threads created by the pool for easier debugging.

Changed in version 3.7: Added the initializer and initargs arguments.

Changed in version 3.8: Default value of max_workers is changed to min(32, os.cpu_count() + 4). This default value preserves at least 5 workers for I/O bound tasks. It utilizes at most 32 CPU cores for CPU bound tasks which release the GIL. And it avoids using very large resources implicitly on many-core machines.

ThreadPoolExecutor now reuses idle worker threads before starting max_workers worker threads too.
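
A small sketch of these constructor arguments (the init function and the "Worker" prefix are illustrative, not from the docs):

import threading
from concurrent.futures import ThreadPoolExecutor

def init(tag):
    # runs once in each worker thread before it executes any task
    print(threading.current_thread().name, "initialized with", tag)

with ThreadPoolExecutor(max_workers=2,
                        thread_name_prefix="Worker",
                        initializer=init,
                        initargs=("demo",)) as executor:
    executor.submit(print, "task ran")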

ThreadPoolExecutor Example
import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
ProcessPoolExecutor
The ProcessPoolExecutor class is an Executor subclass that uses a pool of processes to execute calls asynchronously. ProcessPoolExecutor uses the multiprocessing module, which allows it to side-step the Global Interpreter Lock but also means that only picklable objects can be executed and returned.

The __main__ module must be importable by worker subprocesses. This means that ProcessPoolExecutor will not work in the interactive interpreter.

Calling Executor or Future methods from a callable submitted to a ProcessPoolExecutor will result in deadlock.

class concurrent.futures.ProcessPoolExecutor(max_workers=None, mp_context=None, initializer=None, initargs=())
An Executor subclass that executes calls asynchronously using a pool of at most max_workers processes. If max_workers is None or not given, it will default to the number of processors on the machine. If max_workers is less than or equal to 0, then a ValueError will be raised. On Windows, max_workers must be less than or equal to 61; if it is not, then a ValueError will be raised. If max_workers is None, then the default chosen will be at most 61, even if more processors are available. mp_context can be a multiprocessing context or None. It will be used to launch the workers. If mp_context is None or not given, the default multiprocessing context is used.

initializer is an optional callable that is called at the start of each worker process; initargs is a tuple of arguments passed to the initializer. Should initializer raise an exception, all currently pending jobs will raise a BrokenProcessPool, as well as any attempt to submit more jobs to the pool.

Changed in version 3.3: When one of the worker processes terminates abruptly, a BrokenProcessPool error is now raised. Previously, behaviour was undefined but operations on the executor or its futures would often freeze or deadlock.

Changed in version 3.7: The mp_context argument was added to allow users to control the start_method for worker processes created by the pool.

Added the initializer and initargs arguments.

ProcessPoolExecutor Example
import concurrent.futures
import math

PRIMES = [
    112272535095293,
    112582705942171,
    112272535095293,
    115280095190773,
    115797848077099,
    1099726899285419]

def is_prime(n):
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False

    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

def main():
    with concurrent.futures.ProcessPoolExecutor() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))

if __name__ == '__main__':
    main()
Future Objects
The Future class encapsulates the asynchronous execution of a callable. Future instances are created by Executor.submit().

class concurrent.futures.Future
Encapsulates the asynchronous execution of a callable. Future instances are created by Executor.submit() and should not be created directly except for testing.

cancel()
Attempt to cancel the call. If the call is currently being executed or finished running and cannot be cancelled then the method will return False, otherwise the call will be cancelled and the method will return True.

cancelled()
Return True if the call was successfully cancelled.

running()
Return True if the call is currently being executed and cannot be cancelled.

done()
Return True if the call was successfully cancelled or finished running.

result(timeout=None)
Return the value returned by the call. If the call hasn’t yet completed then this method will wait up to timeout seconds. If the call hasn’t completed in timeout seconds, then a concurrent.futures.TimeoutError will be raised. timeout can be an int or float. If timeout is not specified or None, there is no limit to the wait time.

If the future is cancelled before completing then CancelledError will be raised.

If the call raised, this method will raise the same exception.
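
For example, a minimal sketch of an exception propagating through result():

from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(int, "not a number")
    try:
        future.result(timeout=5)
    except ValueError as exc:
        # the ValueError raised inside the worker is re-raised here
        print("call raised:", exc)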

exception(timeout=None)
Return the exception raised by the call. If the call hasn’t yet completed then this method will wait up to timeout seconds. If the call hasn’t completed in timeout seconds, then a concurrent.futures.TimeoutError will be raised. timeout can be an int or float. If timeout is not specified or None, there is no limit to the wait time.

If the future is cancelled before completing then CancelledError will be raised.

If the call completed without raising, None is returned.

add_done_callback(fn)
Attaches the callable fn to the future. fn will be called, with the future as its only argument, when the future is cancelled or finishes running.

Added callables are called in the order that they were added and are always called in a thread belonging to the process that added them. If the callable raises an Exception subclass, it will be logged and ignored. If the callable raises a BaseException subclass, the behavior is undefined.

If the future has already completed or been cancelled, fn will be called immediately.
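
A minimal sketch of attaching a callback (the report function is illustrative):

from concurrent.futures import ThreadPoolExecutor

def report(future):
    # invoked with the finished future as its only argument
    print("result:", future.result())

with ThreadPoolExecutor(max_workers=1) as executor:
    f = executor.submit(pow, 2, 10)
    f.add_done_callback(report)  # prints "result: 1024" once f completes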

The following Future methods are meant for use in unit tests and Executor implementations.

set_running_or_notify_cancel()
This method should only be called by Executor implementations before executing the work associated with the Future and by unit tests.

If the method returns False then the Future was cancelled, i.e. Future.cancel() was called and returned True. Any threads waiting on the Future completing (i.e. through as_completed() or wait()) will be woken up.

If the method returns True then the Future was not cancelled and has been put in the running state, i.e. calls to Future.running() will return True.

This method can only be called once and cannot be called after Future.set_result() or Future.set_exception() have been called.

set_result(result)
Sets the result of the work associated with the Future to result.

This method should only be used by Executor implementations and unit tests.

Changed in version 3.8: This method raises concurrent.futures.InvalidStateError if the Future is already done.

set_exception(exception)
Sets the result of the work associated with the Future to the Exception exception.

This method should only be used by Executor implementations and unit tests.

Changed in version 3.8: This method raises concurrent.futures.InvalidStateError if the Future is already done.

Module Functions
concurrent.futures.wait(fs, timeout=None, return_when=ALL_COMPLETED)
Wait for the Future instances (possibly created by different Executor instances) given by fs to complete. Returns a named 2-tuple of sets. The first set, named done, contains the futures that completed (finished or cancelled futures) before the wait completed. The second set, named not_done, contains the futures that did not complete (pending or running futures).

timeout can be used to control the maximum number of seconds to wait before returning. timeout can be an int or float. If timeout is not specified or None, there is no limit to the wait time.

return_when indicates when this function should return. It must be one of the following constants:

  • FIRST_COMPLETED: The function will return when any future finishes or is cancelled.
  • FIRST_EXCEPTION: The function will return when any future finishes by raising an exception. If no future raises an exception then it is equivalent to ALL_COMPLETED.
  • ALL_COMPLETED: The function will return when all futures finish or are cancelled.
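
A short sketch of wait() with FIRST_COMPLETED (the sleep durations are arbitrary):

import time
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(time.sleep, d) for d in (0.1, 1.0)]
    done, not_done = wait(futures, return_when=FIRST_COMPLETED)
    # typically the 0.1s sleep lands in done and the 1.0s sleep in not_done
    print(len(done), "done,", len(not_done), "not done")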

concurrent.futures.as_completed(fs, timeout=None)
Returns an iterator over the Future instances (possibly created by different Executor instances) given by fs that yields futures as they complete (finished or cancelled futures). Any futures given by fs that are duplicated will be returned once. Any futures that completed before as_completed() is called will be yielded first. The returned iterator raises a concurrent.futures.TimeoutError if __next__() is called and the result isn’t available after timeout seconds from the original call to as_completed(). timeout can be an int or float. If timeout is not specified or None, there is no limit to the wait time.
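
And an illustrative sketch of iterating futures with as_completed():

import concurrent.futures

with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = {executor.submit(pow, 2, n): n for n in (3, 5, 8)}
    for future in concurrent.futures.as_completed(futures):
        # futures are yielded in completion order, not submission order
        print("2 **", futures[future], "=", future.result())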

See also
PEP 3148 – futures - execute computations asynchronously
The proposal which described this feature for inclusion in the Python standard library.

Exception classes
exception concurrent.futures.CancelledError
Raised when a future is cancelled.

exception concurrent.futures.TimeoutError
Raised when a future operation exceeds the given timeout.

exception concurrent.futures.BrokenExecutor
Derived from RuntimeError, this exception class is raised when an executor is broken for some reason, and cannot be used to submit or execute new tasks.

New in version 3.7.

exception concurrent.futures.InvalidStateError
Raised when an operation is performed on a future that is not allowed in the current state.

New in version 3.8.

exception concurrent.futures.thread.BrokenThreadPool
Derived from BrokenExecutor, this exception class is raised when one of the workers of a ThreadPoolExecutor has failed initializing.

New in version 3.7.

exception concurrent.futures.process.BrokenProcessPool
Derived from BrokenExecutor (formerly RuntimeError), this exception class is raised when one of the workers of a ProcessPoolExecutor has terminated in a non-clean fashion (for example, if it was killed from the outside).