2020-02-26

ReentrantLock vs Synchronized - Performance comparison

I recently ran into performance problems under high concurrency. The root cause turned out to be that LinkedBlockingQueue could not keep up, and the eventual fix was to create multiple queues so that contention on each queue was reduced. It was the first time a JDK data structure could not meet my needs, so I was determined to find out why.

The load test ran on a 40-core machine, with Tomcat at its default of 200 threads. The sender issued requests from 500 concurrent connections at roughly 10,000 QPS, with a requirement that the 99.9th-percentile latency stay around 50 ms. The code submits a database-write task asynchronously, and in actual testing more than 60% of the latency was spent enqueuing that task (the task is in fact submitted to a ThreadPool). So I started digging into the LinkedBlockingQueue implementation.

A LinkedBlockingQueue is essentially a LinkedList whose operations are guarded by ordinary ReentrantLocks. ReentrantLock (like the other locks in java.util.concurrent) relies on CAS internally to achieve atomicity. Under high contention, however, threads keep retrying the CAS, so in theory performance can be worse than that of a native (intrinsic) lock.
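To make the retry cost concrete, here is a minimal sketch of the CAS retry loop that java.util.concurrent classes such as AtomicInteger use for incrementAndGet(); under heavy contention a thread may spin through this loop many times before its compareAndSet succeeds:

import java.util.concurrent.atomic.AtomicInteger;

static int incrementAndGet(AtomicInteger counter) {
    int current;
    do {
        current = counter.get();  // read the latest value
        // if another thread changed the counter in between, compareAndSet fails and we retry
    } while (!counter.compareAndSet(current, current + 1));
    return current + 1;
}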


Tests and results

It is actually hard to compare CAS with native locks fairly. Java has no purely native lock: synchronized has received many JDK optimizations and, at low contention, also uses CAS internally. I compared synchronized against Unsafe.compareAndSwapInt and found that CAS at best ties and is sometimes beaten. So as the next best thing I compared the performance of ReentrantLock and synchronized.

When a thread fails to acquire a ReentrantLock, it is placed on a wait queue and no longer takes part in the contention that follows, so ReentrantLock is not representative of raw CAS performance under high concurrency. Since we rarely use CAS directly anyway, the test results are still meaningful.

The tests use the JMH framework, which is designed for accurate micro-benchmarking. The machine has 40 cores, so at least 40 threads can genuinely compete at the same time (with too few CPU cores, no matter how many threads you start, the amount of true simultaneous contention may be small). Tested on JDK 1.8.

Increment operation

First, a test comparing synchronized and ReentrantLock on a synchronized increment operation. The test code is as follows:

@Benchmark
@Group("lock")
@GroupThreads(4)
public void lockedOp() {
    lock.lock();
    try {
        lockCounter++;
    } finally {
        lock.unlock();
    }
}
 

@Benchmark
@Group("synchronized")
@GroupThreads(4)
public void synchronizedOp() {
    synchronized (this) {
        rawCounter++;
    }
}
 

The results are as follows:



List operation

The CPU time of the increment operation is too short, so we lengthen each operation a little by inserting an element into a LinkedList instead. The code is as follows:

@Benchmark
@Group("lock")
@GroupThreads(2)
public void lockedOp() {
    lock.lock();
    try {
        lockQueue.add("event");
        if (lockQueue.size() >= CLEAR_COUNT) {
            lockQueue.clear();
        }
    } finally {
        lock.unlock();
    }
}



@Benchmark
@Group("synchronized")
@GroupThreads(2)
public void synchronizedOp() {
    synchronized (this) {
        rawQueue.add("event");
        if (rawQueue.size() >= CLEAR_COUNT) {
            rawQueue.clear();
        }
    }
}
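The fields these benchmark methods rely on are not shown in the snippets above; the following JMH state is a hedged reconstruction of the harness (the class name and the CLEAR_COUNT value are assumptions, the field names match the snippets):

import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;
import org.openjdk.jmh.annotations.*;

@State(Scope.Group)
public class LockBenchmark {
    static final int CLEAR_COUNT = 1000;  // assumed clear threshold

    final ReentrantLock lock = new ReentrantLock();
    long lockCounter;
    long rawCounter;
    final List<String> lockQueue = new LinkedList<>();
    final List<String> rawQueue = new LinkedList<>();

    // the @Benchmark methods shown above live in this class
}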
 
The results are as follows:

Result analysis

You can see that ReentrantLock still outperforms synchronized.
Throughput is lowest with 2 threads and improves at 3 threads. My guess: when exactly two threads compete, every lock handoff forces a thread switch, whereas with more threads competing (unfairly), some threads can take the lock directly from the releasing thread without being rescheduled.
As the number of threads grows, throughput drops only slightly. First, since at most one thread executes the synchronized code at any moment, adding threads cannot add throughput. Second, once most threads are parked waiting, they are unlikely to be woken soon, so they are unlikely to join the subsequent contention.
(In the LinkedList test) once the lock is held longer, the throughput gap between ReentrantLock and synchronized narrows, which suggests the cost of CAS retries is growing.
This test gives me more confidence in ReentrantLock, but for day-to-day development synchronized is still generally recommended; after all, the JDK team keeps optimizing it (I saw an article saying Lock and synchronized perform about the same in JDK 9).

HTTP keep-alive

HTTP keep-alive is also called HTTP persistent connection. It reduces the overhead of creating / closing multiple TCP connections by reusing one TCP connection to send / receive multiple HTTP requests.

What is keep-alive?

Keep-alive is a convention between the client and the server: if it is enabled, the server does not close the TCP connection after returning a response, and the client likewise keeps the connection open after receiving the response, reusing it for the next HTTP request.

In HTTP/1.0, if the request headers contain:

Connection: keep-alive
then keep-alive is enabled, and the server's response headers will contain the same header.

In HTTP/1.1, keep-alive is enabled by default, unless it is explicitly turned off:

Connection: close

The purpose of keep-alive is to reuse the same TCP connection across multiple HTTP requests, reducing the overhead of creating and closing TCP connections (response time, CPU, congestion, and so on).


However, there is no free lunch. If the client does not close the connection after receiving everything, the corresponding server-side resources stay occupied even though they are idle. For example, in Tomcat's BIO implementation an unclosed connection ties up its processing thread: if a keep-alive connection has been fully handled but its close timeout has not yet expired, the thread remains occupied (the NIO implementation does not have this problem).

Obviously, if the client and server genuinely need to communicate repeatedly, enabling keep-alive is the better choice. In a microservice architecture, for example, the consumer and provider of a service typically communicate over a long period, so keep-alive fits well.

In REST services with high TPS/QPS, using short connections (that is, with keep-alive disabled) can easily exhaust the client's ports. A large number of TCP connections is created in a short time, and after TCP's four-way close the client side of each connection sits in TIME_WAIT for a while (2 * MSL); the port is not released during that window, so ports run out. In this case long connections are clearly preferable.

How does the client enable it?

Almost all the tools we use now have long connections turned on by default:

For browsers: almost every browser in use today (including IE6) uses keep-alive by default.
Java 8's HttpURLConnection enables keep-alive by default, but its connection pool only retains 5 idle connections per destination; if more than 5 are in use at the same time, new connections are created, and the surplus ones are actively closed by the client when they finish (a tuning sketch follows this list).
Apache HttpClient by default keeps 2 connections per route and at most 20 connections in the pool.
Python requests enables keep-alive by default when you use a session.
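As a hedged aside, the JDK's idle-connection cache for HttpURLConnection can be tuned via the http.maxConnections networking property; the value below is only an example, and the property should be set before the first request is made:

// Raise the JDK keep-alive cache from its default of 5 idle connections per destination
System.setProperty("http.maxConnections", "20");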
Here are some code notes:

Feign using an Apache HttpClient connection pool:

PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
connectionManager.setMaxTotal(maxConnections);
connectionManager.setDefaultMaxPerRoute(maxConnectionsPerRoute);

CloseableHttpClient httpClient = HttpClients
    .custom()
    .setConnectionManager(connectionManager)
    .build();

return Feign.builder()
        .client(new ApacheHttpClient(httpClient))
        .options(new Request.Options(connectTimeoutMills, readTimeoutMills))
        .retryer(new Retryer.Default(retryPeriod, retryMaxPeriod, retryMaxAttempts))
        .encoder(new JacksonEncoder(JsonUtil.getObjectMapper()))
        .decoder(new JacksonDecoder(JsonUtil.getObjectMapper()))
        .decode404()
        .target(PredictorFeignService.class, endpoint);

How the server implements it

Different servers implement keep-alive differently; even Tomcat's different connector modes handle it differently. Below is roughly the processing logic in NIO mode (Tomcat 9.0.22):

In the NioEndpoint#SocketProcessor class, the socket is closed only when the internal state is CLOSED:

if (state == SocketState.CLOSED) {
    poller.cancelledKey(key, this.socketWrapper);
}
In the Http11Processor#service method, if the connection is keep-alive, the final internal state will be OPEN:

} else if (this.openSocket) {
    return this.readComplete ? SocketState.OPEN : SocketState.LONG;
} else {

A retained connection is closed by NioEndpoint#Poller#timeout once the timeout period has elapsed:

} else if (!NioEndpoint.this.processSocket(socketWrapper, SocketEvent.ERROR, true)) {
    this.cancelledKey(key, socketWrapper);
}

Furthermore, with Spring Boot you can use server.connection-timeout to adjust how long keep-alive connections are retained. If it is not set, each server's own default applies; Tomcat's default is 60s.
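For example, in application.properties (the 20-second value is purely illustrative):

# keep idle keep-alive connections for 20s instead of Tomcat's default 60s
server.connection-timeout=20000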

Spring Security - API token authentication implementation

Common permission authentication is done with a username and password. For some APIs in our business we instead want to authenticate with API tokens: for example, appending a token to the URL, as in /api?token=xxxx, allows API access. The logic behind this design is that usernames and passwords carry higher permissions, while an API token can grant permissions to just one subsystem.

Spring Security 

Both the Java Servlet API and Spring Security use the Chain of Responsibility design pattern. Simply put, they each define many filters, and every request is processed by the filters layer by layer before a response is returned.


Spring Security registers a single filter, FilterChainProxy, in the servlet's filter chain; it proxies requests to multiple filter chains maintained by Spring Security itself. Each of those chains matches some URLs (for example /foo/** in the figure), and when a chain matches, its filters are executed. The chains are ordered, and a request only runs through the first matching chain. Configuring Spring Security essentially means adding, removing, and modifying filters. The figure shows the filter chain assembled by http.formLogin():

You can see the defaults include quite a lot: CsrfFilter generates and verifies CSRF tokens, UsernamePasswordAuthenticationFilter handles username/password authentication, SessionManagementFilter manages the session, and so on. The "authorization and authentication" we care about actually splits into two parts:

Authentication: establishing that "you are you". If the username and password match, the caller is considered to be that user.
Authorization: deciding "are you allowed?". For example, the delete function might be restricted to administrators.

Authentication

Take username and password as an example. To authenticate whether a user is a system user, we need two steps:

First, extract the authentication information, such as the username and password, from the request. The authentication information needs to implement the Authentication interface.
Second, verify that the authentication information is correct, e.g. that the password or the API token matches.
Determining whether the user may access a given URL is a separate concern and belongs to authorization.
Verifying credentials usually requires custom and often complex logic, so Spring Security defines the AuthenticationManager interface for it:

public interface AuthenticationManager {
    Authentication authenticate(Authentication authentication) throws AuthenticationException;
}

If authentication succeeds, return the authentication information (for example with the password erased).
If authentication fails, throw an AuthenticationException.
If it cannot decide, return null.
Spring Security's most commonly used implementation is ProviderManager. Internally it holds an authentication chain of multiple AuthenticationProviders, which ProviderManager calls one by one until a provider succeeds.

public interface AuthenticationProvider {
    Authentication authenticate(Authentication authentication) throws AuthenticationException;
    boolean supports(Class<?> authentication);
}

The difference from AuthenticationManager is the extra supports method, which decides whether the provider can handle the given kind of authentication information; for example, an API token authenticator does not support username/password authentication.

In addition, ProviderManager defines a parent-child relationship: if none of the providers in a ProviderManager can authenticate some information, it delegates to its parent ProviderManager, as shown in the figure:

In theory we don't need any of this; we could write one filter that handles all the requirements ourselves. But by using these interfaces we get Spring Security's "infrastructure" for free: for example, when an AuthenticationException is thrown, ExceptionTranslationFilter calls the configured authenticationEntryPoint.commence() to handle it and return a 401, and so on.


Authorization

To decide whether "you are qualified", we first need to know who "you" are, i.e. the Authentication from the previous section. We also need to know how resources are configured for access, e.g. which roles may access which URLs. Spring Security likewise defines the relevant interfaces; authorization starts in FilterSecurityInterceptor.

public interface AccessDecisionManager {

    void decide(Authentication authentication,
                Object object,
                Collection<ConfigAttribute> configAttributes)
            throws AccessDeniedException, InsufficientAuthenticationException;

    boolean supports(ConfigAttribute attribute);

    boolean supports(Class<?> clazz);
}

The decide method determines whether authorization succeeds; if permission is denied, it throws an AccessDeniedException. Its parameters are:

authentication: the authentication information, from which e.g. the current user's roles can be obtained
object: the resource being accessed, such as a URL or a method
configAttributes: the configuration of the resource, e.g. that the URL may only be accessed by the administrator role (ROLE_ADMIN)

In Spring Security the concrete authorization strategy is a voting mechanism: each AccessDecisionVoter casts a vote, and how the votes are tallied depends on the concrete AccessDecisionManager implementation. AffirmativeBased needs only one vote in favor; ConsensusBased needs a majority; UnanimousBased needs everyone in favor. AffirmativeBased is the default.
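As a rough sketch, such a decision manager could be assembled by hand from the voters in Spring Security's access.vote package (normally the default configuration does this for you):

// One approving vote suffices: RoleVoter handles ROLE_* attributes,
// AuthenticatedVoter handles IS_AUTHENTICATED_* attributes.
List<AccessDecisionVoter<?>> voters = Arrays.asList(new RoleVoter(), new AuthenticatedVoter());
AccessDecisionManager decisionManager = new AffirmativeBased(voters);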

Like authentication, if we follow this set of interfaces, Spring Security's default configuration reduces our workload: for example the voting mechanism above, and returning 403 when an AccessDeniedException is thrown.

Configuration

How Spring Security works is not hard to understand, but producing the configuration you actually want has always been the painful part for me. The specifics cannot be covered in a few sentences, so here is a simple example illustrating some of the correspondences:

@Configuration
@Order(1)
public class TokenSecurityConfig extends WebSecurityConfigurerAdapter { // ①

    // ②
    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.authenticationProvider(new TokenAuthenticationProvider(tokenService));
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
                .antMatcher("/api/v1/square/**") // ③
                .addFilterAfter(new TokenAuthenticationFilter(), BasicAuthenticationFilter.class) // ④
                .authorizeRequests()
                .anyRequest().hasRole("API"); // ⑤
    }
}

① Start by extending WebSecurityConfigurerAdapter. As mentioned earlier, Spring Security may contain multiple filter chains, and each WebSecurityConfigurerAdapter corresponds to one chain; the URL pattern it matches is given at ③, and its precedence by @Order.
② Override configure(AuthenticationManagerBuilder auth) to configure the authentication logic. Each WebSecurityConfigurerAdapter configuration produces one ProviderManager, and this configure method may supply multiple AuthenticationProviders.
③ Specifies the URL pattern the current filter chain matches. Use antMatcher for a single pattern, or requestMatcher/requestMatchers for advanced configuration such as multiple patterns.
④ The addFilter family of methods adds filters to the current chain; there seems to be no way to remove one.
⑤ hasRole and friends specify the authorization logic, here that access to the URLs above requires the API role.

API Token implementation

To implement the API token authentication described at the beginning, we need the following:

An Authentication implementation that stores the token-related authentication information.
A filter that extracts the token from the request.
An AuthenticationProvider that verifies the token is correct.
A filter that returns a custom error message when authentication fails.

Authentication information

Since the API token only needs to store the token itself, the implementation is as follows:

public class TokenAuthentication implements Authentication {
    private String token;

    private TokenAuthentication(String token) {
        this.token = token;
    }

    @Override
    public Object getCredentials() {
        return token;
    }

    // ... omit other methods
 }

Filter for extracting tokens

Because the token is passed in the URL, this filter reads the URL parameters and builds the TokenAuthentication defined in the previous section:

public class TokenAuthenticationFilter extends OncePerRequestFilter { // ①

    @Override
    protected void doFilterInternal(HttpServletRequest req, HttpServletResponse res, FilterChain fc)
            throws ServletException, IOException {

        SecurityContext context = SecurityContextHolder.getContext();
        if (context.getAuthentication() != null && context.getAuthentication().isAuthenticated()) {
            // do nothing
        } else {
            // ②
            Map<String, String[]> params = req.getParameterMap();
            if (!params.isEmpty() && params.containsKey("token")) {
                String token = params.get("token")[0];
                if (token != null) {
                    Authentication auth = new TokenAuthentication(token);
                    SecurityContextHolder.getContext().setAuthentication(auth);
                }
            }
            req.setAttribute("me.lotabout.springsecurityexample.security.TokenAuthenticationFilter.FILTERED", true); //③
        }

        fc.doFilter(req, res); //④
    }
}
① Extending OncePerRequestFilter carries no special intent; it just prevents the filter from being invoked more than once per request.
② Read the token from the URL and store the resulting Authentication in the SecurityContext for later logic to use.
③ After this attribute is set, the filter will not be called again for this request.
④ Invoke the rest of the filter chain.

The filter above obtains the token from the URL. We would normally compare it against the token stored in the database; here we compare against an in-memory value instead:

public class TokenAuthenticationProvider implements AuthenticationProvider {

    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {

        if (authentication.isAuthenticated()) {
            return authentication;
        }

        // Get the token from the TokenAuthentication
        String token = authentication.getCredentials().toString();
        if (Strings.isNullOrEmpty(token)) {
            return authentication;
        }

        if (!token.equals("abcdefg")) {
            throw ResultException.of(MyError.TOKEN_NOT_FOUND).errorData(token);
        }

        User user = User.builder()
                    .username("api")
                    .password("")
                    .authorities(Role.API)
                    .build();

        // Return new authentication information carrying the token and the resolved user
        Authentication auth = new PreAuthenticatedAuthenticationToken(user, token, user.getAuthorities());
        auth.setAuthenticated(true);
        return auth;
    }


    @Override
    public boolean supports(Class<?> aClass) {
        return TokenAuthentication.class.isAssignableFrom(aClass);
    }
}


Error handling

We want errors to return a 200 status code with a body that contains "success": false and a specific error message.

public class ResultExceptionTranslationFilter extends GenericFilterBean {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain fc) throws IOException, ServletException {
        try {
            fc.doFilter(request, response);
        } catch (ResultException ex) {
            response.setContentType("application/json; charset=UTF-8");
            response.setCharacterEncoding("UTF-8");
            response.getWriter().println(JsonUtil.toJson(Response.of(ex)));
            response.getWriter().flush();
        }
    }
}

Assembling the configuration

The full configuration is similar to the one shown earlier. Note that we also disable CSRF protection and session creation.

@Configuration
@Order(1)
public class PredictorSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.authenticationProvider(new TokenAuthenticationProvider(tokenService));
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
                .antMatcher(PATTERN_SQUARE)
                .addFilterAfter(new TokenAuthenticationFilter(), BasicAuthenticationFilter.class)
                .addFilterAfter(new ResultExceptionTranslationFilter(), ExceptionTranslationFilter.class)
                .authorizeRequests()
                .anyRequest().hasRole("API")
                .and()
                .csrf()
                .disable()
                .sessionManagement()
                .sessionCreationPolicy(SessionCreationPolicy.STATELESS);
    }
}
The complete code can be found in the Spring Security Example.

2020-02-22

Java - ZGC (A Scalable Low-Latency Garbage Collector)

The Z Garbage Collector, also known as ZGC, is a scalable low latency garbage collector.

It is designed to meet the following goals:

  • Pause times do not exceed 10 ms
  • Pause times do not increase with the heap or live-set size
  • Handle heaps ranging from a few hundred megabytes to multi terabytes in size

ZGC is a concurrent garbage collector, meaning that all heavy lifting work (marking, compaction, reference processing, string table cleaning, etc) is done while Java threads continue to execute. This greatly limits the negative impact that garbage collection has on application response times.

ZGC is included as an experimental feature. To enable it, the -XX:+UnlockExperimentalVMOptions option will therefore need to be used in combination with the -XX:+UseZGC option.
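For example, a JVM using ZGC would be launched with both flags (the heap size here is only illustrative):

java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx16g -jar app.jar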

This experimental version of ZGC has the following limitations:
  • It is only available on Linux/x64.
  • Using compressed oops and/or compressed class pointers is not supported. The -XX:+UseCompressedOops and -XX:+UseCompressedClassPointers options are disabled by default. Enabling them will have no effect.
  • Class unloading is not supported. The -XX:+ClassUnloading and -XX:+ClassUnloadingWithConcurrentMark options are disabled by default. Enabling them will have no effect.
  • Using ZGC in combination with Graal is not supported.

Java Lazy Allocation of Compiler Threads

The command line flag -XX:+UseDynamicNumberOfCompilerThreads controls the number of compiler threads dynamically.

The VM starts a large number of compiler threads on systems with many CPUs regardless of the available memory and the number of compilation requests. Because the threads consume memory even when they are idle (which is almost all of the time), this leads to an inefficient use of resources.

To address this issue, the implementation has been changed to start only one compiler thread of each type during startup and to handle the start and shutdown of further threads dynamically. It is controlled by a new command line flag, which is on by default:

-XX:+UseDynamicNumberOfCompilerThreads

Java Collection.toArray(IntFunction) Default Method

A new default method toArray(IntFunction) has been added to the java.util.Collection interface.

This method allows the collection's elements to be transferred to a newly created array of a desired runtime type. The new method is an overload of the existing toArray(T[]) method that takes an array instance as an argument. The addition of the overloaded method creates a minor source incompatibility. Previously, code of the form coll.toArray(null) would always resolve to the existing toArray method. With the new overloaded method, this code is now ambiguous and will result in a compile-time error. (This is only a source incompatibility. Existing binaries are unaffected.)

The ambiguous code should be changed to cast null to the desired array type, for example, toArray((Object[])null) or some other array type. Note that passing null to either toArray method is specified to throw NullPointerException.
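A short self-contained illustration of the new overload next to the old one (JDK 11+):

import java.util.List;

public class ToArrayDemo {
    public static void main(String[] args) {
        List<String> names = List.of("a", "b", "c");

        // New overload: the IntFunction allocates an array of the desired type
        String[] viaGenerator = names.toArray(String[]::new);

        // Pre-existing overload taking an array instance
        String[] viaInstance = names.toArray(new String[0]);

        System.out.println(viaGenerator.length == viaInstance.length);  // true
    }
}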


What are the important changes in Java JDK 11?

A few details about these changes, as described in the JDK 11 Release Notes:


  • The deployment stack, required for Applets and Web Start Applications, was deprecated in JDK 9 and has been removed in JDK 11.
  • Without a deployment stack, the entire section of supported browsers has been removed from the list of supported configurations of JDK 11.
  • Auto-update, which was available for JRE installations on Windows and macOS, is no longer available.
  • In Windows and macOS, installing the JDK in previous releases optionally installed a JRE. In JDK 11, this is no longer an option.
  • In this release, the JRE or Server JRE is no longer offered. Only the JDK is offered. Users can use jlink to create smaller custom runtimes.
  • JavaFX is no longer included in the JDK. It is now available as a separate download from openjfx.io.
  • Java Mission Control, which was shipped in JDK 7, 8, 9, and 10, is no longer included with the Oracle JDK. It is now a separate download.
  • Previous releases were translated into English, Japanese, and Simplified Chinese as well as French, German, Italian, Korean, Portuguese (Brazilian), Spanish, and Swedish. However, in JDK 11 and later, French, German, Italian, Korean, Portuguese (Brazilian), Spanish, and Swedish translations are no longer provided.
  • Updated packaging format for Windows has changed from tar.gz to .zip, which is more common in Windows OSs.
  • Updated package format for macOS has changed from .app to .dmg, which is more in line with the standard for macOS.

Java FileSystems.newFileSystem Method

Three new methods have been added to java.nio.file.FileSystems

newFileSystem(Path)
newFileSystem(Path, Map<String, ?>)
newFileSystem(Path, Map<String, ?>, ClassLoader)

newFileSystem(Path, Map<String, ?>) creates a source (but not binary) compatibility issue for code that has been using the existing 2-arg newFileSystem(Path, ClassLoader) and specifying the class loader as null. For example, the following cannot be compiled because the reference to newFileSystem is ambiguous:

FileSystem fs = FileSystems.newFileSystem(path, null);

To avoid the ambiguous reference, this code needs to be modified to cast the second parameter to java.lang.ClassLoader.
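For example, opening a zip archive as a FileSystem with the cast in place (archive.zip is just a placeholder path):

import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;

public class NewFileSystemDemo {
    public static void main(String[] args) throws Exception {
        Path zip = Paths.get("archive.zip");  // placeholder: any existing zip file

        // FileSystems.newFileSystem(zip, null) is now ambiguous;
        // casting selects the (Path, ClassLoader) overload.
        try (FileSystem fs = FileSystems.newFileSystem(zip, (ClassLoader) null)) {
            fs.getRootDirectories().forEach(System.out::println);
        }
    }
}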

What are the Differences Between Oracle JDK and Oracle's OpenJDK?

Differences Between Oracle JDK and Oracle's OpenJDK:


  • Oracle JDK offers "installers" (msi, rpm, deb, etc.) which not only place the JDK binaries in your system but also contain update rules and in some cases handle some common configurations like set common environmental variables (such as, JAVA_HOME in Windows) and establish file associations (such as, use java to launch .jar files). OpenJDK is offered only as compressed archive (tar.gz or .zip).
  • javac --release for release values 9 and 10 behaves differently. Oracle JDK binaries include APIs that were not added to OpenJDK binaries, such as javafx, resource management, and (pre-JDK 11) JFR APIs.
  • Usage Logging is only available in Oracle JDK.
  • Oracle JDK requires that third-party cryptographic providers be signed with a Java Cryptography Extension (JCE) Code Signing Certificate. OpenJDK continues allowing the use of unsigned third-party crypto providers.
  • The output of java -version is different. Oracle JDK returns java and includes the Oracle-specific identifier. OpenJDK returns OpenJDK and does not include the Oracle-specific identifier.
  • Oracle JDK is released under the OTN License. OpenJDK is released under GPLv2wCP. License files included with each will therefore be different.
  • Oracle JDK distributes FreeType under the FreeType license and OpenJDK does so under GPLv2. The contents of \legal\java.desktop\freetype.md is therefore different.
  • Oracle JDK has Java cup and steam icons and OpenJDK has Duke icons.
  • Oracle JDK source code includes "ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms." Source code distributed with OpenJDK refers to the GPL license terms instead.

Spring Integration - Integration Pattern

Integration Pattern

The IntegrationPattern abstraction has been introduced to indicate which enterprise integration pattern (an IntegrationPatternType) and category a Spring Integration component belongs to.

See JavaDocs and Integration Graph.

ReactiveMessageHandler

The ReactiveMessageHandler is now natively supported in the framework. See ReactiveMessageHandler for more information.

2020-02-21

Angular 9 Is Now Available - What's New?

The 9.0.0 release of Angular is here! This release spans the framework, Angular Material, and the CLI. It switches applications to the Ivy compiler and runtime by default and introduces improved ways of testing components.

This is one of the biggest updates to Angular of the past three years; it empowers developers to build better applications and contribute to the Angular ecosystem.

How to update to version 9


First, update to the latest version of 8
ng update @angular/cli@8 @angular/core@8

Then, update to 9
ng update @angular/cli @angular/core

Ivy

Version 9 moves all applications to use the Ivy compiler and runtime by default.

Ivy compiler and runtime advantages:


  • Smaller bundle sizes
  • Faster testing
  • Better debugging
  • Improved CSS class and style binding
  • Improved type checking
  • Improved build errors
  • Improved build times, enabling AOT on by default
  • Improved Internationalization


Faster testing


Previously, TestBed would recompile all components between the running of each test, regardless of whether there were any changes made to components.

In Ivy, TestBed doesn't recompile components between tests unless a component has been manually overridden, which allows it to avoid recompilation for the vast majority of tests.

Improved CSS class and style binding

The Ivy compiler and runtime provides improvements for handling styles.

<my-component style="color:red;" [style.color]="myColor" [style]="{color: myOtherColor}" myDirective></my-component>

@Component({
  host: {
    style: "color:blue"
  },...
})
...

@Directive({
  host: {
    style: "color:black",
    "[style.color]": "property"
  },...
})
...

<div [style.--main-border-color]=" '#CCC' ">
  <p style="border: 1px solid var(--main-border-color)">hi</p>
</div>

Improved type checking

These features will help you and your team catch bugs earlier in the development process.

  • fullTemplateTypeCheck — Activating this flag tells the compiler to check everything within your template (ngIf, ngFor, ng-template, etc)
  • strictTemplates — Activating this flag will apply the strictest Type System rules for type checking.

New components

You can now include capabilities from YouTube and Google Maps in your applications.
  • You can render a YouTube Player inline within your application with the new youtube-player. After you load the YouTube IFrame player API, this component will take advantage of it.
  • We are also introducing google-maps components. These components make it easy to render Google Maps, display markers, and wire up interactivity in a way that works like a normal Angular component, saving you from needing to learn the full Google Maps API.

Lowest common ancestor

Write a program to find the lowest common ancestor.

The lowest common ancestor (LCA) of two nodes v and w in a tree is the deepest node that has both v and w as descendants, where we define each node to be a descendant of itself (so if v has a direct connection from w, w is the lowest common ancestor).

The LCA of v and w in T is the shared ancestor of v and w that is located farthest from the root. Computation of lowest common ancestors may be useful, for instance, as part of a procedure for determining the distance between pairs of nodes in a tree: the distance from v to w can be computed as the distance from the root to v, plus the distance from the root to w, minus twice the distance from the root to their lowest common ancestor (Djidjev, Pantziou & Zaroliagis 1991). In ontologies, the lowest common ancestor is also known as the least common ancestor.
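In other words:

dist(v, w) = dist(root, v) + dist(root, w) - 2 * dist(root, lca(v, w))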



// Node of a binary tree
class Node {
    int data;
    Node left, right;
    Node(int data) { this.data = data; }
}

class LowestCommonAncestor {

    // Set to true by findLCAUtil when n1 / n2 is found
    boolean v1 = false, v2 = false;

    // Returns a pointer to the LCA of the two given values n1 and n2.
    Node findLCAUtil(Node node, int n1, int n2) {
        // Base case
        if (node == null)
            return null;

        // Store the result in temp in case of a key match, so that we can
        // keep searching for the other key as well.
        Node temp = null;

        // If either n1 or n2 matches this node's key, report the presence by
        // setting v1 or v2 and return this node. (Note that if one key is an
        // ancestor of the other, the ancestor key becomes the LCA.)
        if (node.data == n1) {
            v1 = true;
            temp = node;
        }
        if (node.data == n2) {
            v2 = true;
            temp = node;
        }

        // Look for the keys in the left and right subtrees
        Node leftLca = findLCAUtil(node.left, n1, n2);
        Node rightLca = findLCAUtil(node.right, n1, n2);

        if (temp != null)
            return temp;

        // If both calls return non-null, one key is present in each subtree,
        // so this node is the LCA
        if (leftLca != null && rightLca != null)
            return node;

        // Otherwise the LCA is in whichever subtree returned non-null
        return (leftLca != null) ? leftLca : rightLca;
    }

    // Returns the LCA only if both n1 and n2 are present in the tree
    Node findLCA(Node root, int n1, int n2) {
        v1 = false;
        v2 = false;
        Node lca = findLCAUtil(root, n1, n2);
        return (v1 && v2) ? lca : null;
    }
}



Time complexity: the above solution runs in O(n).


Spring Elasticsearch Repositories

Elasticsearch Repositories

Query creation

Generally the query creation mechanism for Elasticsearch works as described in Query methods. Here's a short example of what an Elasticsearch query method translates into:

Query creation from method names

interface BookRepository extends Repository<Book, String> {
  List<Book> findByNameAndPrice(String name, Integer price);
}
The method name above will be translated into the following Elasticsearch JSON query:

{ "bool" :
    { "must" :
        [
            { "field" : {"name" : "?"} },
            { "field" : {"price" : "?"} }
        ]
    }
}

Using @Query Annotation

Declare the query on the method using the @Query annotation:
interface BookRepository extends ElasticsearchRepository<Book, String> {
    @Query("{\"bool\" : {\"must\" : {\"field\" : {\"name\" : \"?0\"}}}}")
    Page<Book> findByName(String name,Pageable pageable);
}

Annotation based configuration


Spring Data Elasticsearch repositories using JavaConfig
@Configuration
@EnableElasticsearchRepositories(                             
  basePackages = "org.springframework.data.elasticsearch.repositories"
  )
static class Config {

  @Bean
  public ElasticsearchOperations elasticsearchTemplate() {    
      // ...
  }
}

class ProductService {

  private ProductRepository repository;                       

  public ProductService(ProductRepository repository) {
    this.repository = repository;
  }

  public Page<Product> findAvailableBookByName(String name, Pageable pageable) {
    return repository.findByAvailableTrueAndNameStartingWith(name, pageable);
  }
}
The EnableElasticsearchRepositories annotation activates the repository support. If no base package is configured, it uses the package of the configuration class it is placed on.
Provide a bean named elasticsearchTemplate of type ElasticsearchOperations, using one of the configurations shown in the Elasticsearch Operations chapter.
Let Spring inject the repository bean into your class.


Spring Data Elasticsearch repositories using CDI

class ElasticsearchTemplateProducer {

  @Produces
  @ApplicationScoped
  public ElasticsearchOperations createElasticsearchTemplate() {
    // ...                             
  }
}

class ProductService {

  private ProductRepository repository; 
  public Page<Product> findAvailableBookByName(String name, Pageable pageable) {
    return repository.findByAvailableTrueAndNameStartingWith(name, pageable);
  }
  @Inject
  public void setRepository(ProductRepository repository) {
    this.repository = repository;
  }
}



Spring Elasticsearch Operations

Elasticsearch Operations

Spring Data Elasticsearch uses two interfaces to define the operations that can be called against an Elasticsearch index. These are ElasticsearchOperations and ReactiveElasticsearchOperations. Whereas the first is used with the classic synchronous implementations, the second one uses reactive infrastructure.

The default implementations of the interfaces offer:

Read/Write mapping support for domain types.
A rich query and criteria api.
Resource management and Exception translation.

ElasticsearchTemplate

The ElasticsearchTemplate is an implementation of the ElasticsearchOperations interface using the Transport Client.

@Configuration
public class TransportClientConfig extends ElasticsearchConfigurationSupport {

  @Bean
  public Client elasticsearchClient() throws UnknownHostException {                 
    Settings settings = Settings.builder().put("cluster.name", "elasticsearch").build();
    TransportClient client = new PreBuiltTransportClient(settings);
    client.addTransportAddress(new TransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
    return client;
  }

  @Bean(name = {"elasticsearchOperations", "elasticsearchTemplate"})
  public ElasticsearchTemplate elasticsearchTemplate() throws UnknownHostException { 
  return new ElasticsearchTemplate(elasticsearchClient(), entityMapper());
  }

  // use the ElasticsearchEntityMapper
  @Bean
  @Override
  public EntityMapper entityMapper() {                                               
    ElasticsearchEntityMapper entityMapper = new ElasticsearchEntityMapper(elasticsearchMappingContext(),
    new DefaultConversionService());
    entityMapper.setConversions(elasticsearchCustomConversions());
    return entityMapper;
  }
}

ElasticsearchRestTemplate

The ElasticsearchRestTemplate is an implementation of the ElasticsearchOperations interface using the High Level REST Client.

@Configuration
public class RestClientConfig extends AbstractElasticsearchConfiguration {
  @Override
  public RestHighLevelClient elasticsearchClient() {       
    return RestClients.create(ClientConfiguration.localhost()).rest();
  }

  // no special bean creation needed                       

  // use the ElasticsearchEntityMapper
  @Bean
  @Override
  public EntityMapper entityMapper() {                     
    ElasticsearchEntityMapper entityMapper = new ElasticsearchEntityMapper(elasticsearchMappingContext(),
        new DefaultConversionService());
    entityMapper.setConversions(elasticsearchCustomConversions());

    return entityMapper;
  }
}

Both ElasticsearchTemplate and ElasticsearchRestTemplate implement the ElasticsearchOperations interface, so the code using them does not differ. The example shows how to use an injected ElasticsearchOperations instance in a Spring REST controller. Whether the TransportClient or the RestClient is used is decided by providing the corresponding bean with one of the configurations shown above.

@RestController
@RequestMapping("/")
public class TestController {

  private  ElasticsearchOperations elasticsearchOperations;

  public TestController(ElasticsearchOperations elasticsearchOperations) { 
    this.elasticsearchOperations = elasticsearchOperations;
  }

  @PostMapping("/person")
  public String save(@RequestBody Person person) {                         

    IndexQuery indexQuery = new IndexQueryBuilder()
      .withId(person.getId().toString())
      .withObject(person)
      .build();
    String documentId = elasticsearchOperations.index(indexQuery);
    return documentId;
  }

  @GetMapping("/person/{id}")
  public Person findById(@PathVariable("id")  Long id) {                   
    Person person = elasticsearchOperations
      .queryForObject(GetQuery.getById(id.toString()), Person.class);
    return person;
  }
}

Reactive Template Configuration

The easiest way of setting up the ReactiveElasticsearchTemplate is via AbstractReactiveElasticsearchConfiguration, which provides dedicated configuration method hooks for the base package, the initial entity set, etc.

ReactiveElasticsearchOperations is the gateway to executing high level commands against an Elasticsearch cluster using the ReactiveElasticsearchClient.

The ReactiveElasticsearchTemplate is the default implementation of ReactiveElasticsearchOperations.

To get started the ReactiveElasticsearchTemplate needs to know about the actual client to work with.

@Configuration
public class Config extends AbstractReactiveElasticsearchConfiguration {

  @Bean 
  @Override
  public ReactiveElasticsearchClient reactiveElasticsearchClient() {
      // ...
  }
}

Configure the ReactiveElasticsearchTemplate

@Configuration
public class Config {

  @Bean
  public ReactiveElasticsearchClient reactiveElasticsearchClient() {
    // ...
  }
  @Bean
  public ElasticsearchConverter elasticsearchConverter() {
    return new MappingElasticsearchConverter(elasticsearchMappingContext());
  }
  @Bean
  public SimpleElasticsearchMappingContext elasticsearchMappingContext() {
    return new SimpleElasticsearchMappingContext();
  }
  @Bean
  public ReactiveElasticsearchOperations reactiveElasticsearchOperations() {
    return new ReactiveElasticsearchTemplate(reactiveElasticsearchClient(), elasticsearchConverter());
  }
}


Use the ReactiveElasticsearchTemplate

ReactiveElasticsearchTemplate lets you save, find and delete your domain objects and map those objects to documents stored in Elasticsearch.

@Document(indexName = "marvel", type = "characters")
public class Person {

  private @Id String id;
  private String name;
  private int age;
  // Getter/Setter omitted...
}
template.save(new Person("Bruce Banner", 42))                    
  .doOnNext(System.out::println)
  .flatMap(person -> template.findById(person.id, Person.class)) 
  .doOnNext(System.out::println)
  .flatMap(person -> template.delete(person))                    
  .doOnNext(System.out::println)
  .flatMap(id -> template.count(Person.class))                   
  .doOnNext(System.out::println)
  .subscribe();

Spring Elasticsearch Object Mapping

Elasticsearch Object Mapping

Spring Data Elasticsearch lets you choose between two mapping implementations, abstracted via the EntityMapper interface:

  • Jackson Object Mapping
  • Meta Model Object Mapping


Jackson Object Mapping

The Jackson2 based approach (used by default) utilizes a customized ObjectMapper instance with spring data specific modules. Extensions to the actual mapping need to be customized via Jackson annotations like @JsonInclude.

@Configuration
public class Config extends AbstractElasticsearchConfiguration { 

  @Override
  public RestHighLevelClient elasticsearchClient() {
    return RestClients.create(ClientConfiguration.create("localhost:9200")).rest();
  }
}

AbstractElasticsearchConfiguration already defines a Jackson2-based entityMapper via ElasticsearchConfigurationSupport.
CustomConversions, @ReadingConverter and @WritingConverter cannot be applied when using the Jackson-based EntityMapper.
Setting the name of a mapped field with @Field(name="custom-name") also cannot be used with this mapper.

Meta Model Object Mapping

The metamodel-based approach uses domain type information for reading from and writing to Elasticsearch. It allows registering Converter instances for specific domain type mappings.


@Configuration
public class Config extends AbstractElasticsearchConfiguration {

  @Override
  public RestHighLevelClient elasticsearchClient() {
    return RestClients.create(ClientConfiguration.create("localhost:9200")).rest();
  }

  @Bean
  @Override
  public EntityMapper entityMapper() {                                 

    ElasticsearchEntityMapper entityMapper = new ElasticsearchEntityMapper(
      elasticsearchMappingContext(), new DefaultConversionService()    
    );
    entityMapper.setConversions(elasticsearchCustomConversions());     

  return entityMapper;
  }
}

Overwrite the default EntityMapper from ElasticsearchConfigurationSupport and expose it as bean.
Use the provided SimpleElasticsearchMappingContext to avoid inconsistencies and provide a GenericConversionService for Converter registration.
Optionally set CustomConversions if applicable.

Mapping Annotation


The ElasticsearchEntityMapper can use metadata to drive the mapping of objects to documents. The following annotations are available:

@Id: Applied at the field level to mark the field used for identity purpose.

@Document: Applied at the class level to indicate this class is a candidate for mapping to the database. The most important attributes are:

indexName: the name of the index to store this entity in

type: the mapping type. If not set, the lowercased simple name of the class is used.

shards: the number of shards for the index.

replicas: the number of replicas for the index.

refreshInterval: Refresh interval for the index. Used for index creation. Default value is "1s".

indexStoreType: Index storage type for the index. Used for index creation. Default value is "fs".

createIndex: Configuration whether to create an index on repository bootstrapping. Default value is true.

versionType: Configuration of version management. Default value is EXTERNAL.

@Transient: By default all private fields are mapped to the document; this annotation excludes the annotated field from being stored in the database.

@PersistenceConstructor: Marks a given constructor - even a package protected one - to use when instantiating the object from the database. Constructor arguments are mapped by name to the key values in the retrieved Document.

@Field: Applied at the field level and defines properties of the field, most of the attributes map to the respective Elasticsearch Mapping definitions:

name: The name of the field as it will be represented in the Elasticsearch document, if not set, the Java field name is used.

type: the field type, can be one of Text, Integer, Long, Date, Float, Double, Boolean, Object, Auto, Nested, Ip, Attachment, Keyword.

format and pattern: custom definitions for the Date type.

store: Flag whether the original field value should be stored in Elasticsearch. Default value is false.

analyzer, searchAnalyzer, normalizer: for specifying custom analyzers and normalizers.

copy_to: the target field to copy multiple document fields to.

@GeoPoint: marks a field as geo_point datatype. Can be omitted if the field is an instance of the GeoPoint class.
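Putting a few of these annotations together on a hypothetical entity (the index name, field names, and analyzer are assumptions for illustration):

@Document(indexName = "articles", type = "article")
public class Article {

    @Id
    private String id;

    @Field(type = FieldType.Text, analyzer = "standard")
    private String title;

    @Field(name = "published-at", type = FieldType.Date)
    private java.util.Date publishedAt;

    @Transient
    private String cachedSummary;  // not written to the document
}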

Mapping Rules

Type Hints

Mapping uses type hints embedded in the document sent to the server to allow generic type mapping. Those type hints are represented as _class attributes within the document and are written for each aggregate root.

public class Person {              

  @Id String id;
  String firstname;
  String lastname;
}
{
  "_class" : "com.example.Person", 
  "id" : "cb7bef",
  "firstname" : "Sarah",
  "lastname" : "Connor"
}


Type Hints with Alias
@TypeAlias("human")                
public class Person {

  @Id String id;
  // ...
}
{
  "_class" : "human",              
  "id" : ...
}


Geospatial Types

Geospatial types like Point & GeoPoint are converted into lat/lon pairs.

public class Address {

  String city, street;
  Point location;
}
{
  "city" : "Los Angeles",
  "street" : "2800 East Observatory Road",
  "location" : { "lat" : 34.118347, "lon" : -118.3026284 }
}


Collections

public class Person {

  // ...

  List<Person> friends;

}
{
  // ...

  "friends" : [ { "firstname" : "Kyle", "lastname" : "Reese" } ]
}

Collections Map


public class Person {

  // ...

  Map<String, Address> knownLocations;

}
{
  // ...

  "knownLocations" : {
    "arrivedAt" : {
       "city" : "Los Angeles",
       "street" : "2800 East Observatory Road",
       "location" : { "lat" : 34.118347, "lon" : -118.3026284 }
     }
  }
}



Spring Elasticsearch Clients

Spring Elasticsearch Clients

Spring Data Elasticsearch operates upon an Elasticsearch client that is connected to a single Elasticsearch node or a cluster. Although the Elasticsearch client can be used directly to work with the cluster, applications using Spring Data Elasticsearch normally use the higher-level abstractions of Elasticsearch Operations and Elasticsearch Repositories.

Transport Client

static class Config {

  @Bean
  Client client() {
  Settings settings = Settings.builder()
    .put("cluster.name", "elasticsearch")   
      .build();
  TransportClient client = new PreBuiltTransportClient(settings);
    client.addTransportAddress(new TransportAddress(InetAddress.getByName("127.0.0.1")
      , 9300));                               
    return client;
  }
}

// ...

IndexRequest request = new IndexRequest("spring-data", "elasticsearch", randomID())
 .source(someObject)
 .setRefreshPolicy(IMMEDIATE);


High Level REST Client

The Java High Level REST Client is now the default client of Elasticsearch. It provides a straightforward replacement for the TransportClient, as it accepts and returns the very same request/response objects and therefore depends on the Elasticsearch core project. Asynchronous calls are operated on a client-managed thread pool and require a callback to be notified when the request is done.

@Configuration
static class Config {

  @Bean
  RestHighLevelClient client() {

    ClientConfiguration clientConfiguration = ClientConfiguration.builder() 
      .connectedTo("localhost:9200", "localhost:9201")
      .build();

    return RestClients.create(clientConfiguration).rest();                  
  }
}

// ...

  @Autowired
  RestHighLevelClient highLevelClient;

  RestClient lowLevelClient = highLevelClient.lowLevelClient();             

// ...

IndexRequest request = new IndexRequest("spring-data", "elasticsearch", randomID())
  .source(singletonMap("feature", "high-level-rest-client"))
  .setRefreshPolicy(IMMEDIATE);

IndexResponse response = highLevelClient.index(request);

Reactive Client


The ReactiveElasticsearchClient is a non-official driver based on WebClient. It uses the request/response objects provided by the Elasticsearch core project. Calls operate directly on the reactive stack, not wrapping async (thread pool bound) responses into reactive types.

static class Config {

  @Bean
  ReactiveElasticsearchClient client() {

    ClientConfiguration clientConfiguration = ClientConfiguration.builder()   
      .connectedTo("localhost:9200", "localhost:9291")
      .withWebClientConfigurer(webClient -> {                                 
        ExchangeStrategies exchangeStrategies = ExchangeStrategies.builder()
            .codecs(configurer -> configurer.defaultCodecs()
                .maxInMemorySize(-1))
            .build();
        return webClient.mutate().exchangeStrategies(exchangeStrategies).build();
       })
      .build();

    return ReactiveRestClients.create(clientConfiguration);
  }
}

// ...

Mono<IndexResponse> response = client.index(request ->

  request.index("spring-data")
    .type("elasticsearch")
    .id(randomID())
    .source(singletonMap("feature", "reactive-client"))
    .setRefreshPolicy(IMMEDIATE)
);

Client Configuration

Client behavior can be changed via the ClientConfiguration, which allows setting options for SSL, connect and socket timeouts.

// optional if Basic Authentication is needed
HttpHeaders defaultHeaders = new HttpHeaders();
defaultHeaders.setBasicAuth(USER_NAME, USER_PASS);                      

ClientConfiguration clientConfiguration = ClientConfiguration.builder()
  .connectedTo("localhost:9200", "localhost:9291")                      
  .withConnectTimeout(Duration.ofSeconds(5))                            
  .withSocketTimeout(Duration.ofSeconds(3))                             
  .useSsl()                                                             
  .withDefaultHeaders(defaultHeaders)                                   
  .withBasicAuth(username, password)                                    
  . // ... other options
  .build();



2020-02-20

Core Java HTTP Client

The HTTP Client was added in Java 11.

It can be used to request HTTP resources over the network. It supports HTTP/1.1 and HTTP/2, both synchronous and asynchronous programming models, handles request and response bodies as reactive-streams.

GET request that prints the response body as a String

HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
      .uri(URI.create("http://openjdk.java.net/"))
      .build();
client.sendAsync(request, BodyHandlers.ofString())
      .thenApply(HttpResponse::body)
      .thenAccept(System.out::println)
      .join();

HttpClient

To send a request, first create an HttpClient from its builder. The builder can be used to configure per-client state, such as:


  • The preferred protocol version ( HTTP/1.1 or HTTP/2 )
  • Whether to follow redirects
  • A proxy
  • An authenticator


HttpClient client = HttpClient.newBuilder()
      .version(Version.HTTP_2)
      .followRedirects(Redirect.SAME_PROTOCOL)
      .proxy(ProxySelector.of(new InetSocketAddress("www-proxy.com", 8080)))
      .authenticator(Authenticator.getDefault())
      .build();

HttpRequest

An HttpRequest is created from its builder. The request builder can be used to set:


  • The request URI
  • The request method ( GET, PUT, POST )
  • The request body ( if any )
  • A timeout
  • Request headers

HttpRequest request = HttpRequest.newBuilder()
      .uri(URI.create("http://openjdk.java.net/"))
      .timeout(Duration.ofMinutes(1))
      .header("Content-Type", "application/json")
      .POST(BodyPublishers.ofFile(Paths.get("file.json")))
      .build();
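The request can also be sent synchronously; a minimal sketch using the client and request built above:

// blocks until the response arrives
HttpResponse<String> response = client.send(request, BodyHandlers.ofString());
System.out.println(response.statusCode());
System.out.println(response.body());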