2020-05-27

Spring Data Elasticsearch MappingElasticsearchConverter

The MappingElasticsearchConverter uses metadata to drive the mapping of objects to documents. The metadata is taken from the entity’s properties which can be annotated.

The following annotations are available:

@Document: Applied at the class level to indicate this class is a candidate for mapping to the database. The most important attributes are:

indexName: the name of the index to store this entity in

type: the mapping type. If not set, the lowercased simple name of the class is used. (deprecated since version 4.0)

shards: the number of shards for the index.

replicas: the number of replicas for the index.

refreshInterval: Refresh interval for the index. Used for index creation. Default value is "1s".

indexStoreType: Index storage type for the index. Used for index creation. Default value is "fs".

createIndex: Configuration whether to create an index on repository bootstrapping. Default value is true.

versionType: Configuration of version management. Default value is EXTERNAL.

@Id: Applied at the field level to mark the field used for identity purpose.

@Transient: By default all fields are mapped to the document when it is stored or retrieved; this annotation excludes the field.

@PersistenceConstructor: Marks a given constructor - even a package protected one - to use when instantiating the object from the database. Constructor arguments are mapped by name to the key values in the retrieved Document.

@Field: Applied at the field level and defines properties of the field, most of the attributes map to the respective Elasticsearch Mapping definitions (the following list is not complete, check the annotation Javadoc for a complete reference):

name: The name of the field as it will be represented in the Elasticsearch document, if not set, the Java field name is used.

type: the field type, can be one of Text, Keyword, Long, Integer, Short, Byte, Double, Float, Half_Float, Scaled_Float, Date, Date_Nanos, Boolean, Binary, Integer_Range, Float_Range, Long_Range, Double_Range, Date_Range, Ip_Range, Object, Nested, Ip, TokenCount, Percolator, Flattened, Search_As_You_Type. See Elasticsearch Mapping Types

format and pattern: custom date format definitions for the Date type.

store: Flag whether the original field value should be stored in Elasticsearch, default value is false.

analyzer, searchAnalyzer, normalizer: for specifying custom analyzers and normalizers.

@GeoPoint: marks a field as geo_point datatype. Can be omitted if the field is an instance of the GeoPoint class.

Spring Data Elasticsearch @Field

The @Field annotation now supports nearly all of the types that can be used in Elasticsearch.

@Document(indexName = "person", type = "dummy")
public class Person implements Persistable<Long> {

    @Nullable @Id
    private Long id;

    @Nullable @Field(value = "last-name", type = FieldType.Text, fielddata = true)
    private String lastName;      (1)

    @Nullable @Field(name = "birth-date", type = FieldType.Date, format = DateFormat.basic_date)
    private LocalDate birthDate;  (2)

    @CreatedDate
    @Nullable @Field(type = FieldType.Date, format = DateFormat.basic_date_time)
    private Instant created;      (3)

    // other properties, getter, setter
}


  • (1) in Elasticsearch this field will be named last-name; this mapping is handled transparently
  • (2) a property for a date without time information
  • (3) another property, this time with full date and time information

2020-05-25

Hibernate JPA @FilterJoinTables Example

@FilterJoinTables

The @FilterJoinTables annotation is used to group multiple @FilterJoinTable annotations.

FilterJoinTables - Add multiple @FilterJoinTable annotations to a collection.

Hibernate JPA @FilterJoinTable Example

@FilterJoinTable

The @FilterJoinTable annotation is used to add @Filter capabilities to a join table collection.

FilterJoinTable - Add filters to a join table collection.


@FilterJoinTable

When using the @Filter annotation with collections, the filtering is done against the child entries (entities or embeddables). However, if you have a link table between the parent entity and the child table, then you need to use the @FilterJoinTable to filter child entries according to some column contained in the join table.

The @FilterJoinTable annotation can, therefore, be applied to a unidirectional @OneToMany collection, as illustrated in the following mapping:

Example : @FilterJoinTable mapping usage

@Entity(name = "Client")
@FilterDef(
    name="firstAccounts",
    parameters=@ParamDef(
        name="maxOrderId",
        type="int"
    )
)
public static class Client {

    @Id
    private Long id;

    private String name;

    @OneToMany(cascade = CascadeType.ALL)
    @OrderColumn(name = "order_id")
    @FilterJoinTable(
        name="firstAccounts",
        condition="order_id <= :maxOrderId"
    )
    private List<Account> accounts = new ArrayList<>( );

    //Getters and setters omitted for brevity

    public void addAccount(Account account) {
        this.accounts.add( account );
    }
}

@Entity(name = "Account")
public static class Account {

    @Id
    private Long id;

    @Column(name = "account_type")
    @Enumerated(EnumType.STRING)
    private AccountType type;

    private Double amount;

    private Double rate;

    //Getters and setters omitted for brevity
}
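
The filter is not applied automatically; it must be enabled on the current Hibernate Session before the collection is loaded. A minimal sketch, assuming the mapping above and an open EntityManager (the parameter value 1 is illustrative):

entityManager
    .unwrap(Session.class)
    .enableFilter("firstAccounts")
    .setParameter("maxOrderId", 1);

Client client = entityManager.find(Client.class, 1L);
// client.getAccounts() now contains only the accounts whose
// join-table order_id is <= 1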

Hibernate JPA @FilterDefs Example

@FilterDefs

The @FilterDefs annotation is used to group multiple @FilterDef annotations.

FilterDefs - Array of filter definitions.

Hibernate JPA @FilterDef Example

@FilterDef

The @FilterDef annotation is used to specify a @Filter definition (name, default condition and parameter types, if any).

FilterDef
Filter definition. Defines a name, default condition and parameter types (if any).

@Filter mapping entity-level usage

@Entity(name = "Account")
@Table(name = "account")
@FilterDef(
    name="activeAccount",
    parameters = @ParamDef(
        name="active",
        type="boolean"
    )
)
@Filter(
    name="activeAccount",
    condition="active_status = :active"
)
public static class Account {

    @Id
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    private Client client;

    @Column(name = "account_type")
    @Enumerated(EnumType.STRING)
    private AccountType type;

    private Double amount;

    private Double rate;

    @Column(name = "active_status")
    private boolean active;

    //Getters and setters omitted for brevity
}

Hibernate JPA @Filter Example

@Filter

The @Filter annotation is used to add filters to an entity or the target entity of a collection.

Filter Add filters to an entity or a target entity of a collection.

@Filter
The @Filter annotation is another way to filter out entities or collections using custom SQL criteria. Unlike the @Where annotation, @Filter allows you to parameterize the filter clause at runtime.

Now, considering we have the following Account entity:

Example : @Filter mapping entity-level usage

@Entity(name = "Account")
@Table(name = "account")
@FilterDef(
    name="activeAccount",
    parameters = @ParamDef(
        name="active",
        type="boolean"
    )
)
@Filter(
    name="activeAccount",
    condition="active_status = :active"
)
public static class Account {

    @Id
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    private Client client;

    @Column(name = "account_type")
    @Enumerated(EnumType.STRING)
    private AccountType type;

    private Double amount;

    private Double rate;

    @Column(name = "active_status")
    private boolean active;

    //Getters and setters omitted for brevity
}
Notice that the active property is mapped to the active_status column.

This mapping was done to show you that the @Filter condition uses a SQL condition and not a JPQL filtering predicate.
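
Filters are disabled by default; they must be enabled explicitly on the Hibernate Session and their parameters bound before querying. A minimal sketch, assuming the mapping above and an open EntityManager:

entityManager
    .unwrap(Session.class)
    .enableFilter("activeAccount")
    .setParameter("active", true);

List<Account> accounts = entityManager
    .createQuery("select a from Account a", Account.class)
    .getResultList(); // only rows where active_status = true are returned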


@Filter mapping collection-level usage

@Entity(name = "Client")
@Table(name = "client")
public static class Client {

    @Id
    private Long id;

    private String name;

    private AccountType type;

    @OneToMany(
        mappedBy = "client",
        cascade = CascadeType.ALL
    )
    @Filter(
        name="activeAccount",
        condition="{a}.active_status = :active and {a}.type = {c}.type",
        aliases = {
                @SqlFragmentAlias( alias = "a", table= "account"),
                @SqlFragmentAlias( alias = "c", table= "client"),
        }
    )
    private List<Account> accounts = new ArrayList<>( );

    //Getters and setters omitted for brevity

    public void addAccount(Account account) {
        account.setClient( this );
        this.accounts.add( account );
    }
}

Hibernate JPA @FetchProfiles Example

@FetchProfiles

The @FetchProfiles annotation is used to group multiple @FetchProfile annotations.

FetchProfiles Collects together multiple fetch profiles.

Hibernate JPA @FetchProfile.FetchOverride Example

@FetchProfile.FetchOverride

The @FetchProfile.FetchOverride annotation is used in conjunction with the @FetchProfile annotation, and it’s used for overriding the fetching strategy of a particular entity association.

FetchProfile.FetchOverride Descriptor for a particular association override.

Fetch profile example

@Entity(name = "Employee")
@FetchProfile(
    name = "employee.projects",
    fetchOverrides = {
        @FetchProfile.FetchOverride(
            entity = Employee.class,
            association = "projects",
            mode = FetchMode.JOIN
        )
    }
)
public static class Employee {
    //...
}

Hibernate JPA @FetchProfile Example

@FetchProfile

The @FetchProfile annotation is used to specify a custom fetching profile, similar to a JPA Entity Graph.

FetchProfile Define the fetching strategy profile.

Fetch profile example

@Entity(name = "Employee")
@FetchProfile(
    name = "employee.projects",
    fetchOverrides = {
        @FetchProfile.FetchOverride(
            entity = Employee.class,
            association = "projects",
            mode = FetchMode.JOIN
        )
    }
)
public static class Employee {
    //...
}

session.enableFetchProfile( "employee.projects" );
Employee employee = session.bySimpleNaturalId( Employee.class ).load( username );

Here the Employee is obtained by natural-id lookup and the Employee’s Project data is fetched eagerly. If the Employee data is resolved from cache, the Project data is resolved on its own. However, if the Employee data is not resolved in cache, the Employee and Project data is resolved in one SQL query via join as we saw above.

2020-05-18

Java @Schedule Example

@Schedule


For example, you are writing code to use a timer service that enables you to run a method at a given time or on a certain schedule, similar to the UNIX cron service. Now you want to set a timer to run a method, doPeriodicCleanup, on the last day of the month and on every Friday at 11:00 p.m. To set the timer to run, create an @Schedule annotation and apply it twice to the doPeriodicCleanup method. The first use specifies the last day of the month and the second specifies Friday at 11 p.m., as shown in the following code example:

@Schedule(dayOfMonth="last")
@Schedule(dayOfWeek="Fri", hour="23")
public void doPeriodicCleanup() { ... }

The previous example applies an annotation to a method. You can repeat an annotation anywhere that you would use a standard annotation. For example, you have a class for handling unauthorized access exceptions. You annotate the class with one @Alert annotation for managers and another for admins:

@Alert(role="Manager")
@Alert(role="Administrator")
public class UnauthorizedAccessException extends SecurityException { ... }

For compatibility reasons, repeating annotations are stored in a container annotation that is automatically generated by the Java compiler. In order for the compiler to do this, two declarations are required in your code: the repeatable annotation type itself and its container annotation type, sketched below.
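
Following the Oracle tutorial's @Schedule example, a minimal sketch of the two declarations (the default values are illustrative):

import java.lang.annotation.Repeatable;

// 1. The repeatable annotation type, marked @Repeatable and pointing
//    at its container annotation type.
@Repeatable(Schedules.class)
@interface Schedule {
    String dayOfMonth() default "first";
    String dayOfWeek() default "Mon";
    int hour() default 12;
}

// 2. The container annotation type, which must have a value element
//    whose type is an array of the repeatable annotation type.
@interface Schedules {
    Schedule[] value();
}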

Java @NonNull Example

@NonNull String str;


When you compile the code, including the NonNull module at the command line, the compiler prints a warning if it detects a potential problem, allowing you to modify the code to avoid the error. After you correct the code to remove all warnings, this particular error will not occur when the program runs.

You can use multiple type-checking modules where each module checks for a different kind of error. In this way, you can build on top of the Java type system, adding specific checks when and where you want them.

Java @Repeatable Example

@Repeatable 

@Repeatable annotation, introduced in Java SE 8, indicates that the marked annotation can be applied more than once to the same declaration or type use. For more information, see Repeating Annotations.

Java @Inherited Example

@Inherited 

@Inherited annotation indicates that the annotation type can be inherited from the super class. (This is not true by default.) When the user queries the annotation type and the class has no annotation for this type, the class' superclass is queried for the annotation type. This annotation applies only to class declarations.
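
A minimal sketch of the effect (the names are illustrative):

import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Inherited
@Retention(RetentionPolicy.RUNTIME)
@interface Versioned { }

@Versioned
class Base { }

class Derived extends Base { }

// Derived.class.isAnnotationPresent(Versioned.class) returns true:
// the lookup falls back to the annotated superclass because of @Inherited.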

Java @Target Example

@Target 

@Target annotation marks another annotation to restrict what kind of Java elements the annotation can be applied to. A target annotation specifies one of the following element types as its value:

ElementType.ANNOTATION_TYPE can be applied to an annotation type.
ElementType.CONSTRUCTOR can be applied to a constructor.
ElementType.FIELD can be applied to a field or property.
ElementType.LOCAL_VARIABLE can be applied to a local variable.
ElementType.METHOD can be applied to a method.
ElementType.PACKAGE can be applied to a package declaration.
ElementType.PARAMETER can be applied to the parameters of a method.
ElementType.TYPE can be applied to any type declaration: a class, an interface (including annotation types), or an enum.
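
A brief sketch of declaring a target-restricted annotation (the annotation name is illustrative):

import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

// May be applied only to methods and fields; the compiler
// rejects any other placement.
@Target({ElementType.METHOD, ElementType.FIELD})
@interface Audited { }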

Java @Documented Example

@Documented 

@Documented annotation indicates that whenever the specified annotation is used those elements should be documented using the Javadoc tool. (By default, annotations are not included in Javadoc.) For more information, see the Javadoc tools page.

Java @Retention Example

@Retention 

@Retention annotation specifies how the marked annotation is stored:

RetentionPolicy.SOURCE – The marked annotation is retained only at the source level and is ignored by the compiler.
RetentionPolicy.CLASS – The marked annotation is retained by the compiler at compile time, but is ignored by the Java Virtual Machine (JVM).
RetentionPolicy.RUNTIME – The marked annotation is retained by the JVM so it can be used by the runtime environment.
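
A short sketch (the annotation name is illustrative):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// RUNTIME retention keeps the annotation in the class file and makes it
// visible to reflection at run time.
@Retention(RetentionPolicy.RUNTIME)
@interface Traceable { }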

Java @FunctionalInterface Example

@FunctionalInterface

@FunctionalInterface annotation, introduced in Java SE 8, indicates that the type declaration is intended to be a functional interface, as defined by the Java Language Specification.
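
A minimal sketch (the interface is illustrative):

// A functional interface has exactly one abstract method, so it can be
// implemented with a lambda; the annotation makes the compiler enforce this.
@FunctionalInterface
interface Converter<F, T> {
    T convert(F from);
}

// Usage:
// Converter<String, Integer> toInt = Integer::valueOf;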

Java @SafeVarargs Example

@SafeVarargs 

@SafeVarargs annotation, when applied to a method or constructor, asserts that the code does not perform potentially unsafe operations on its varargs parameter. When this annotation type is used, unchecked warnings relating to varargs usage are suppressed.
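
A brief sketch (the method is illustrative):

import java.util.Arrays;
import java.util.List;

// The annotation asserts that the method never stores anything unsafe into
// the varargs array, so call sites compile without unchecked warnings.
@SafeVarargs
static <T> List<T> listOf(T... items) {
    return Arrays.asList(items);
}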

Java @SuppressWarnings Example

@SuppressWarnings

@SuppressWarnings annotation tells the compiler to suppress specific warnings that it would otherwise generate. In the following example, a deprecated method is used, and the compiler usually generates a warning. In this case, however, the annotation causes the warning to be suppressed.

// use a deprecated method and tell
// compiler not to generate a warning
@SuppressWarnings("deprecation")
void useDeprecatedMethod() {
    // deprecation warning - suppressed
    objectOne.deprecatedMethod();
}

Every compiler warning belongs to a category. The Java Language Specification lists two categories: deprecation and unchecked. The unchecked warning can occur when interfacing with legacy code written before the advent of generics. To suppress multiple categories of warnings, use the following syntax:

@SuppressWarnings({"unchecked", "deprecation"})

Java @Override Example

@Override 

@Override annotation informs the compiler that the element is meant to override an element declared in a superclass. Overriding methods will be discussed in Interfaces and Inheritance.

// mark method as a superclass method
// that has been overridden
@Override
void overriddenMethod() { }

While it is not required to use this annotation when overriding a method, it helps to prevent errors. If a method marked with @Override fails to correctly override a method in one of its superclasses, the compiler generates an error.

Java @Deprecated Example

@Deprecated 

@Deprecated annotation indicates that the marked element is deprecated and should no longer be used. The compiler generates a warning whenever a program uses a method, class, or field with the @Deprecated annotation. When an element is deprecated, it should also be documented using the Javadoc @deprecated tag, as shown in the following example. The use of the at sign (@) in both Javadoc comments and in annotations is not coincidental: they are related conceptually. Also, note that the Javadoc tag starts with a lowercase d and the annotation starts with an uppercase D.

// Javadoc comment follows
/**
 * @deprecated
 * explanation of why it was deprecated
 */
@Deprecated
static void deprecatedMethod() { }

Spring Events Example

Spring Data JDBC triggers events that get published to any matching ApplicationListener beans in the application context. For example, the following listener gets invoked before an aggregate gets saved:

@Bean
public ApplicationListener<BeforeSaveEvent<Object>> loggingSaves() {

    return event -> {

        Object entity = event.getEntity();
        LOG.info("{} is getting saved.", entity);
    };
}

If you want to handle events only for a specific domain type, you may derive your listener from AbstractRelationalEventListener and override one or more of the onXXX methods, where XXX stands for an event type. Callback methods will only get invoked for events related to the domain type and its subtypes, so no further casting is required.

public class PersonLoadListener extends AbstractRelationalEventListener<Person> {

    @Override
    protected void onAfterLoad(AfterLoadEvent<Person> personLoad) {
        LOG.info("{}", personLoad.getEntity());
    }
}

The following table describes the available events:

BeforeDeleteEvent: before an aggregate root gets deleted.

AfterDeleteEvent: after an aggregate root gets deleted.

BeforeConvertEvent: before an aggregate root gets saved (that is, inserted or updated, but after the decision about whether it gets inserted or updated was made).

BeforeSaveEvent: before an aggregate root gets saved (that is, inserted or updated, but after the decision about whether it gets inserted or updated was made).

AfterSaveEvent: after an aggregate root gets saved (that is, inserted or updated).

AfterLoadEvent: after an aggregate root gets created from a database ResultSet and all its properties are set.

Spring PagingAndSortingRepository example

interface PersonRepository extends PagingAndSortingRepository<Person, String> {

  List<Person> findByFirstname(String firstname);                                   (1)

  List<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable); (2)

  Person findByFirstnameAndLastname(String firstname, String lastname);             (3)

  Person findFirstByLastname(String lastname);                                      (4)

  @Query("SELECT * FROM person WHERE lastname = :lastname")
  List<Person> findByLastname(String lastname);                                     (5)
}

  • (1) The method shows a query for all people with the given firstname. The query is derived by parsing the method name for constraints that can be concatenated with And and Or. Thus, the method name results in a query expression of SELECT … FROM person WHERE firstname = :firstname.
  • (2) Use Pageable to pass offset and sorting parameters to the database.
  • (3) Find a single entity for the given criteria. It completes with IncorrectResultSizeDataAccessException on non-unique results.
  • (4) In contrast to (3), the first entity is always returned even if the query yields more results.
  • (5) The findByLastname method shows a query for all people with the given last name, declared manually with @Query.

Elasticsearch POJO mapping example

@Document(indexName = "person")
public class Person implements Persistable<Long> {

    @Id private Long id;
    private String lastName;
    private String firstName;

    @Field(type = FieldType.Date)
    private Instant createdDate;
    private String createdBy;

    @Field(type = FieldType.Date)
    private Instant lastModifiedDate;
    private String lastModifiedBy;

    @Override
    public Long getId() {
        return id;
    }

    @Override
    public boolean isNew() {
        return id == null || (createdDate == null && createdBy == null);
    }

Elasticsearch - async search

Asynchronous search

Asynchronous search makes long-running queries feasible and reliable. Async search allows users to run long-running queries in the background, track the query progress, and retrieve partial results as they become available. Async search enables users to more easily search vast amounts of data with no more pesky timeouts.

Submit async search API

Executes a search request asynchronously. It accepts the same parameters and request body as the search API.

POST /sales*/_async_search?size=0
{
    "sort" : [
      { "date" : {"order" : "asc"} }
    ],
    "aggs" : {
        "sale_date" : {
             "date_histogram" : {
                 "field" : "date",
                 "calendar_interval": "1d"
             }
         }
    }
}

The response contains an identifier of the search being executed. You can use this ID to later retrieve the search’s final results. The currently available search results are returned as part of the response object.

{
  "id" : "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
  "is_partial" : true,
  "is_running" : true,
  "start_time_in_millis" : 1583945890986,
  "expiration_time_in_millis" : 1584377890986,
  "response" : {
    "took" : 1122,
    "timed_out" : false,
    "num_reduce_phases" : 0,
    "_shards" : {
      "total" : 562,
      "successful" : 3,
      "skipped" : 0,
      "failed" : 0
    },
    "hits" : {
      "total" : {
        "value" : 157483,
        "relation" : "gte"
      },
      "max_score" : null,
      "hits" : [ ]
    }
  }
}

  • id: identifier of the async search, which can be used to monitor its progress, retrieve its results, and/or delete it.
  • is_partial: when the query is no longer running, indicates whether the search failed or completed successfully on all shards. While the query is running, is_partial is always true.
  • is_running: whether the search is still being executed or has completed.
  • _shards.total: how many shards the search will be executed on, overall.
  • _shards.successful: how many shards have successfully completed the search.
  • hits.total: how many documents currently match the query, from the shards that have already completed the search.

The get async search API retrieves the results of a previously submitted async search request given its id. If the Elasticsearch security features are enabled, access to the results of a specific async search is restricted to the user that submitted it in the first place.

GET /_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=


{
  "id" : "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
  "is_partial" : true,
  "is_running" : true,
  "start_time_in_millis" : 1583945890986,
  "expiration_time_in_millis" : 1584377890986,
  "response" : {
    "took" : 12144,
    "timed_out" : false,
    "num_reduce_phases" : 46,
    "_shards" : {
      "total" : 562,
      "successful" : 188,
      "skipped" : 0,
      "failed" : 0
    },
    "hits" : {
      "total" : {
        "value" : 456433,
        "relation" : "eq"
      },
      "max_score" : null,
      "hits" : [ ]
    },
    "aggregations" : {
      "sale_date" :  {
        "buckets" : []
      }
    }
  }
}

  • is_partial: when the query is no longer running, indicates whether the search failed or completed successfully on all shards. While the query is running, is_partial is always true.
  • is_running: whether the search is still being executed or has completed.
  • expiration_time_in_millis: when the async search will expire.
  • num_reduce_phases: how many reductions of the results have been performed. If this number increases compared to the last retrieved results, you can expect additional results to be included in the search response.
  • _shards.successful: how many shards have executed the query. Note that for shard results to be included in the search response, they need to be reduced first.
  • aggregations: partial aggregation results, coming from the shards that have already completed the execution of the query.

The wait_for_completion_timeout parameter can also be provided when calling the Get Async Search API, in order to wait for the search to be completed up until the provided timeout. Final results will be returned if available before the timeout expires, otherwise the currently available results will be returned once the timeout expires. By default no timeout is set meaning that the currently available results will be returned without any additional wait.

The keep_alive parameter specifies how long the async search should be available in the cluster. When not specified, the keep_alive set with the corresponding submit async request will be used. Otherwise, it is possible to override such value and extend the validity of the request. When this period expires, the search, if still running, is cancelled. If the search is completed, its saved results are deleted.

Delete async search

You can use the delete async search API to manually delete an async search by ID. If the search is still running, the search request will be cancelled. Otherwise, the saved search results are deleted.

DELETE /_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=

2020-05-15

Java SequenceLayout Example

SequenceLayout

A SequenceLayout denotes the repetition of a given layout. In other words, this can be thought of as a sequence of elements similar to an array with the defined element layout.

For example, we can create a sequence layout for 25 elements of 64 bits each:

SequenceLayout sequenceLayout = MemoryLayout.ofSequence(25,
  MemoryLayout.ofValueBits(64, ByteOrder.nativeOrder()));

Java ValueLayout Example

ValueLayout

A ValueLayout models a memory layout for basic data types such as integer and floating types. Each value layout has a size and a byte order. We can create a ValueLayout using the ofValueBits method:

ValueLayout valueLayout = MemoryLayout.ofValueBits(32, ByteOrder.nativeOrder());

Java - MemoryLayout

MemoryLayout

The MemoryLayout class lets us describe the contents of a memory segment. Specifically, it lets us define how the memory is broken up into elements, where the size of each element is provided.

This is a bit like describing the memory layout as a concrete type, but without providing a Java class. It's similar to how languages like C++ map their structures to memory.

Let's take an example of a cartesian coordinate point defined with the coordinates x and y:


int numberOfPoints = 10;
MemoryLayout pointLayout = MemoryLayout.ofStruct(
  MemoryLayout.ofValueBits(32, ByteOrder.BIG_ENDIAN).withName("x"),
  MemoryLayout.ofValueBits(32, ByteOrder.BIG_ENDIAN).withName("y")
);
SequenceLayout pointsLayout =
  MemoryLayout.ofSequence(numberOfPoints, pointLayout);

Here, we've defined a layout made of two 32-bit values named x and y. This layout can be used with a SequenceLayout to make something similar to an array, in this case with 10 indices.

Java MemoryAddress Example

MemoryAddress

A MemoryAddress is an offset within a memory segment. It's commonly obtained using the baseAddress method:

MemoryAddress address = MemorySegment.allocateNative(100).baseAddress();
A memory address is used to perform operations such as retrieving data from memory on the underlying memory segment.

Java MemorySegment Example

MemorySegment

A memory segment is a contiguous region of memory. This can be either heap or off-heap memory. And, there are several ways to obtain a memory segment.

A memory segment backed by native memory is known as a native memory segment. It's created using one of the overloaded allocateNative methods.

Let's create a native memory segment of 200 bytes:

MemorySegment memorySegment = MemorySegment.allocateNative(200);
A memory segment can also be backed by an existing heap-allocated Java array. For example, we can create an array memory segment from an array of long:


MemorySegment memorySegment = MemorySegment.ofArray(new long[100]);
Additionally, a memory segment can be backed by an existing Java ByteBuffer. This is known as a buffer memory segment:

MemorySegment memorySegment = MemorySegment.ofByteBuffer(ByteBuffer.allocateDirect(200));
Alternatively, we can use a memory-mapped file. This is known as a mapped memory segment. Let's define a 200-byte memory segment using a file path with read-write access:


MemorySegment memorySegment = MemorySegment.mapFromPath(
  Path.of("/tmp/memory.txt"), 200, FileChannel.MapMode.READ_WRITE);
A memory segment is attached to a specific thread. So, if any other thread requires access to the memory segment, it must gain access using the acquire method.

Also, a memory segment has spatial and temporal boundaries in terms of memory access:

Spatial boundary — the memory segment has lower and upper limits
Temporal boundary — governs creating, using, and closing a memory segment
Together, spatial and temporal checks ensure the safety of the JVM.

Java - Foreign-Memory Access API

The Foreign-Memory Access API was proposed by JEP 370 and targeted to Java 14 in late 2019 as an incubating API. This JEP proposes to incorporate refinements based on feedback, and re-incubate the API in Java 15.

The following changes will be considered for inclusion:

A rich VarHandle combinator API, to customize memory access var handles;
Targeted support for parallel processing of a memory segment via the Spliterator interface;
Enhanced support for mapped memory segments (e.g., MappedMemorySegment::force);
Safe API points to support serial confinement (e.g., to transfer thread ownership between two threads); and
Unsafe API points to manipulate and dereference addresses coming from, e.g., native calls, or to wrap such addresses into synthetic memory segments.

Goals

Generality: A single API should be able to operate on various kinds of foreign memory (e.g., native memory, persistent memory, managed heap memory, etc.).
Safety: It should not be possible for the API to undermine the safety of the JVM, regardless of the kind of memory being operated upon.
Determinism: Deallocation operations on foreign memory should be explicit in source code.
Usability: For programs that need to access foreign memory, the API should be a compelling alternative to legacy Java APIs such as sun.misc.Unsafe.

The Foreign-Memory Access API introduces three main abstractions: MemorySegment, MemoryAddress, and MemoryLayout:

A MemorySegment models a contiguous memory region with given spatial and temporal bounds.
A MemoryAddress models an address. There are generally two kinds of addresses: A checked address is an offset within a given memory segment, while an unchecked address is an address whose spatial and temporal bounds are unknown, as in the case of a memory address obtained -- unsafely -- from native code.
A MemoryLayout is a programmatic description of a memory segment's contents.
Memory segments can be created from a variety of sources, such as native memory buffers, Java arrays, and byte buffers (either direct or heap-based). For instance, a native memory segment can be created as follows:

try (MemorySegment segment = MemorySegment.allocateNative(100)) {
   ...
}
This will create a memory segment that is associated with a native memory buffer whose size is 100 bytes.

Memory segments are spatially bounded, which means they have lower and upper bounds. Any attempt to use the segment to access memory outside of these bounds will result in an exception. As evidenced by the use of the try-with-resource construct, memory segments are also temporally bounded, which means they must be created, used, and then closed when no longer in use. Closing a segment is always an explicit operation and can result in additional side effects, such as deallocation of the memory associated with the segment. Any attempt to access an already-closed memory segment will result in an exception. Together, spatial and temporal bounding guarantee the safety of the Foreign-Memory Access API and thus guarantee that its use cannot crash the JVM.

Dereferencing the memory associated with a segment is achieved by obtaining a var handle, which is an abstraction for data access introduced in Java 9. In particular, a segment is dereferenced with a memory-access var handle. This kind of var handle has an access coordinate of type MemoryAddress that serves as the address at which the dereference occurs.

Memory-access var handles are obtained using factory methods in the MemoryHandles class. For instance, to set the elements of a native memory segment, we could use a memory-access var handle as follows:

VarHandle intHandle = MemoryHandles.varHandle(int.class,
        ByteOrder.nativeOrder());

try (MemorySegment segment = MemorySegment.allocateNative(100)) {
    MemoryAddress base = segment.baseAddress();
    for (int i = 0; i < 25; i++) {
        intHandle.set(base.addOffset(i * 4), i);
    }
}
Memory-access var handles can acquire extra access coordinates, of type long, to support more complex addressing schemes, such as multi-dimensional addressing of an otherwise flat memory segment. Such memory-access var handles are typically obtained by invoking combinator methods defined in the MemoryHandles class. For instance, a more direct way to set the elements of a native memory segment is through an indexed memory-access var handle, constructed as follows:

VarHandle intHandle = MemoryHandles.varHandle(int.class,
        ByteOrder.nativeOrder());
VarHandle indexedElementHandle = MemoryHandles.withStride(intHandle, 4);

try (MemorySegment segment = MemorySegment.allocateNative(100)) {
    MemoryAddress base = segment.baseAddress();
    for (int i = 0; i < 25; i++) {
        indexedElementHandle.set(base, (long) i, i);
    }
}
To enhance the expressiveness of the API, and to reduce the need for explicit numeric computations such as those in the above examples, a MemoryLayout can be used to programmatically describe the content of a MemorySegment. For instance, the layout of the native memory segment used in the above examples can be described in the following way:

SequenceLayout intArrayLayout
    = MemoryLayout.ofSequence(25,
        MemoryLayout.ofValueBits(32,
            ByteOrder.nativeOrder()));
This creates a sequence memory layout in which a given element layout (a 32-bit value) is repeated 25 times. Once we have a memory layout, we can get rid of all the manual numeric computation in our code and also simplify the creation of the required memory access var handles, as shown in the following example:

SequenceLayout intArrayLayout
    = MemoryLayout.ofSequence(25,
        MemoryLayout.ofValueBits(32,
            ByteOrder.nativeOrder()));

VarHandle indexedElementHandle
    = intArrayLayout.varHandle(int.class,
        PathElement.sequenceElement());

try (MemorySegment segment = MemorySegment.allocateNative(intArrayLayout)) {
    MemoryAddress base = segment.baseAddress();
    for (int i = 0; i < intArrayLayout.elementCount().getAsLong(); i++) {
        indexedElementHandle.set(base, (long) i, i);
    }
}
In this example, the layout object drives the creation of the memory-access var handle through the creation of a layout path, which is used to select a nested layout from a complex layout expression. The layout object also drives the allocation of the native memory segment, which is based upon size and alignment information derived from the layout. The loop constant in the previous examples (25) has been replaced with the sequence layout's element count.

Dereference operations are only possible on checked memory addresses. Checked addresses are typical in the API, such as the address obtained from a memory segment in the above code (segment.baseAddress()). However, if a memory address is unchecked and does not have any associated segment, then it cannot be dereferenced safely, since the runtime has no way to know the spatial and temporal bounds associated with the address. Some helper functions will be provided to, e.g., attach spatial bounds to an otherwise unchecked address, so as to allow dereference operations. Such operations are, however, unsafe by their very nature, and must be used with care. The API might require such unsafe operations to be enabled by a command-line option at startup.

The Foreign-Memory Access API will be provided as an incubator module named jdk.incubator.foreign, in a package of the same name.

2020-05-14

Hibernate JPA @Fetch Example

@Fetch

The @Fetch annotation is used to specify the Hibernate-specific FetchMode (e.g. JOIN, SELECT, SUBSELECT) used for the currently annotated association.

Fetch defines the fetching strategy used for the given association.


The @Fetch annotation mapping

Besides the FetchType.LAZY or FetchType.EAGER JPA annotations, you can also use the Hibernate-specific @Fetch annotation that accepts one of the following FetchModes:

SELECT
The association is going to be fetched using a secondary select for each individual entity, collection, or join load. This mode can be used for either FetchType.EAGER or FetchType.LAZY.

JOIN
Use an outer join to load the related entities, collections or joins when using direct fetching. This mode can only be used for FetchType.EAGER.

SUBSELECT
Available for collections only. When accessing a non-initialized collection, this fetch mode will trigger loading all elements of all collections of the same role for all owners associated with the persistence context using a single secondary select.

FetchMode.SELECT

To demonstrate how FetchMode.SELECT works, consider the following entity mapping:

Example : FetchMode.SELECT mapping example

@Entity(name = "Department")
public static class Department {

    @Id
    private Long id;

    @OneToMany(mappedBy = "department", fetch = FetchType.LAZY)
    @Fetch(FetchMode.SELECT)
    private List<Employee> employees = new ArrayList<>();

    //Getters and setters omitted for brevity
}

@Entity(name = "Employee")
public static class Employee {

    @Id
    @GeneratedValue
    private Long id;

    @NaturalId
    private String username;

    @ManyToOne(fetch = FetchType.LAZY)
    private Department department;

    //Getters and setters omitted for brevity
}
Considering there are multiple Department entities, each one having multiple Employee entities, when executing the following test case, Hibernate fetches every uninitialized Employee collection using a secondary SELECT statement upon accessing the child collection for the first time:

Example : FetchMode.SELECT mapping example

List<Department> departments = entityManager.createQuery(
        "select d from Department d", Department.class )
    .getResultList();

log.infof( "Fetched %d Departments", departments.size());

for (Department department : departments ) {
    assertEquals( 3, department.getEmployees().size() );
}
SELECT
    d.id as id1_0_
FROM
    Department d

-- Fetched 2 Departments

SELECT
    e.department_id as departme3_1_0_,
    e.id as id1_1_0_,
    e.id as id1_1_1_,
    e.department_id as departme3_1_1_,
    e.username as username2_1_1_
FROM
    Employee e
WHERE
    e.department_id = 1

SELECT
    e.department_id as departme3_1_0_,
    e.id as id1_1_0_,
    e.id as id1_1_1_,
    e.department_id as departme3_1_1_,
    e.username as username2_1_1_
FROM
    Employee e
WHERE
    e.department_id = 2
The more Department entities are fetched by the first query, the more secondary SELECT statements are executed to initialize the employees collections. Therefore, FetchMode.SELECT can lead to the N+1 query problem.

FetchMode.SUBSELECT

To demonstrate how FetchMode.SUBSELECT works, we are going to modify the FetchMode.SELECT mapping example to use FetchMode.SUBSELECT:

Example : FetchMode.SUBSELECT mapping example

@OneToMany(mappedBy = "department", fetch = FetchType.LAZY)
@Fetch(FetchMode.SUBSELECT)
private List<Employee> employees = new ArrayList<>();
Now, we are going to fetch all Department entities that match a given filtering predicate and then navigate their employees collections.

Hibernate is going to avoid the N+1 query issue by generating a single SQL statement to initialize all employees collections for all Department entities that were previously fetched. Instead of passing all entity identifiers, Hibernate simply reruns the previous query that fetched the Department entities.

Example : FetchMode.SUBSELECT mapping example

List<Department> departments = entityManager.createQuery(
        "select d " +
        "from Department d " +
        "where d.name like :token", Department.class )
    .setParameter( "token", "Department%" )
    .getResultList();

log.infof( "Fetched %d Departments", departments.size());

for (Department department : departments ) {
    assertEquals( 3, department.getEmployees().size() );
}
SELECT
    d.id as id1_0_
FROM
    Department d
where
    d.name like 'Department%'

-- Fetched 2 Departments

SELECT
    e.department_id as departme3_1_1_,
    e.id as id1_1_1_,
    e.id as id1_1_0_,
    e.department_id as departme3_1_0_,
    e.username as username2_1_0_
FROM
    Employee e
WHERE
    e.department_id in (
        SELECT
            fetchmodes0_.id
        FROM
            Department fetchmodes0_
        WHERE
            fetchmodes0_.name like 'Department%'
    )

FetchMode.JOIN

To demonstrate how FetchMode.JOIN works, we are going to modify the FetchMode.SELECT mapping example to use FetchMode.JOIN instead:

Example : FetchMode.JOIN mapping example

@OneToMany(mappedBy = "department")
@Fetch(FetchMode.JOIN)
private List<Employee> employees = new ArrayList<>();
Now, we are going to fetch one Department and navigate its employees collections.

The reason we are not using a JPQL query to fetch multiple Department entities is that the FetchMode.JOIN strategy would be overridden by the query's own fetching directives.

To fetch multiple relationships with a JPQL query, the JOIN FETCH directive must be used instead.

Therefore, FetchMode.JOIN is useful for when entities are fetched directly, via their identifier or natural-id.

Also, the FetchMode.JOIN acts as a FetchType.EAGER strategy. Even if we mark the association as FetchType.LAZY, the FetchMode.JOIN will load the association eagerly.

Hibernate is going to avoid the secondary query by issuing an OUTER JOIN for the employees collection.

Example : FetchMode.JOIN mapping example

Department department = entityManager.find( Department.class, 1L );

log.infof( "Fetched department: %s", department.getId());

assertEquals( 3, department.getEmployees().size() );
SELECT
    d.id as id1_0_0_,
    e.department_id as departme3_1_1_,
    e.id as id1_1_1_,
    e.id as id1_1_2_,
    e.department_id as departme3_1_2_,
    e.username as username2_1_2_
FROM
    Department d
LEFT OUTER JOIN
    Employee e
        on d.id = e.department_id
WHERE
    d.id = 1

-- Fetched department: 1

Hibernate JPA @DynamicUpdate Example

@DynamicUpdate

The @DynamicUpdate annotation is used to specify that the UPDATE SQL statement should be generated whenever an entity is modified.

By default, Hibernate uses a cached UPDATE statement that sets all table columns. When the entity is annotated with the @DynamicUpdate annotation, the PreparedStatement is going to include only the columns whose values have been changed.

For updating, should this entity use dynamic sql generation where only changed columns get referenced in the prepared sql statement?
Note, for re-attachment of detached entities this is not possible without select-before-update being enabled.

Dynamic updates

To enable dynamic updates, you need to annotate the entity with the @DynamicUpdate annotation:

Example : Product entity mapping

@Entity(name = "Product")
@DynamicUpdate
public static class Product {

    @Id
    private Long id;

    @Column
    private String name;

    @Column
    private String description;

    @Column(name = "price_cents")
    private Integer priceCents;

    @Column
    private Integer quantity;

    //Getters and setters are omitted for brevity
}
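
A brief sketch of the effect (the entity values and the SQL shown in the comments are illustrative):

Product product = entityManager.find( Product.class, 1L );
product.setQuantity( 5 );

// With @DynamicUpdate, flushing generates an UPDATE that references only
// the changed column:
//   UPDATE Product SET quantity = ? WHERE id = ?
// Without it, Hibernate's cached UPDATE statement would set every column.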

Hibernate JPA @DynamicInsert Example

@DynamicInsert

The @DynamicInsert annotation is used to specify that the INSERT SQL statement should be generated whenever an entity is to be persisted.

By default, Hibernate uses a cached INSERT statement that sets all table columns. When the entity is annotated with the @DynamicInsert annotation, the PreparedStatement is going to include only the non-null columns.

DynamicInsert
For inserting, should this entity use dynamic sql generation where only non-null columns get referenced in the prepared sql statement?
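
A minimal mapping sketch, modeled on the @DynamicUpdate example above (the Product entity and the SQL shown in the comments are illustrative):

@Entity(name = "Product")
@DynamicInsert
public static class Product {

    @Id
    private Long id;

    private String name;

    private String description;

    //Getters and setters are omitted for brevity
}

// Persisting a Product whose description is null generates
//   INSERT INTO Product (id, name) VALUES (?, ?)
// omitting the null column.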

Hibernate JPA @DiscriminatorOptions Example

@DiscriminatorOptions

The @DiscriminatorOptions annotation is used to provide the force and insert Discriminator properties.

DiscriminatorOptions Optional annotation to express Hibernate specific discriminator properties.

Discriminator
The discriminator column contains marker values that tell the persistence layer what subclass to instantiate for a particular row. Hibernate Core supports the following restricted set of types as the discriminator column: String, char, int, byte, short, boolean (including yes_no, true_false).

Use the @DiscriminatorColumn to define the discriminator column as well as the discriminator type.

The enum DiscriminatorType used in javax.persistence.DiscriminatorColumn only contains the values STRING, CHAR and INTEGER which means that not all Hibernate supported types are available via the @DiscriminatorColumn annotation. You can also use @DiscriminatorFormula to express in SQL a virtual discriminator column. This is particularly useful when the discriminator value can be extracted from one or more columns of the table. Both @DiscriminatorColumn and @DiscriminatorFormula are to be set on the root entity (once per persisted hierarchy).

@org.hibernate.annotations.DiscriminatorOptions allows you to optionally specify Hibernate-specific discriminator options which are not standardized in JPA. The available options are force and insert.

The force attribute is useful if the table contains rows with extra discriminator values that are not mapped to a persistent class. This could, for example, occur when working with a legacy database. If force is set to true, Hibernate will specify the allowed discriminator values in the SELECT query even when retrieving all instances of the root class.

The second option, insert, tells Hibernate whether or not to include the discriminator column in SQL INSERTs. Usually, the column should be part of the INSERT statement, but if your discriminator column is also part of a mapped composite identifier you have to set this option to false.

There used to be a @org.hibernate.annotations.ForceDiscriminator annotation which was deprecated in version 3.6 and later removed. Use @DiscriminatorOptions instead.
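
A minimal sketch showing both options on a root entity (the hierarchy and column name are illustrative):

@Entity(name = "Account")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "account_type")
@DiscriminatorOptions(force = true, insert = false)
public static class Account {

    @Id
    private Long id;

    //Getters and setters are omitted for brevity
}

// force = true: SELECTs for the root class filter on the known
// discriminator values, ignoring rows with unmapped values.
// insert = false: the discriminator column is omitted from INSERTs
// (needed when the column is also part of a mapped composite identifier).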

Hibernate JPA @DiscriminatorFormula Example

@DiscriminatorFormula

The @DiscriminatorFormula annotation is used to specify a Hibernate @Formula to resolve the inheritance discriminator value.

DiscriminatorFormula Used to apply a Hibernate formula (derived value) as the inheritance discriminator "column". Used in place of the JPA DiscriminatorColumn when a formula is wanted. To be placed on the root entity.

Single Table discriminator formula

@Entity(name = "Account")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorFormula(
    "case when debitKey is not null " +
    "then 'Debit' " +
    "else ( " +
    "   case when creditKey is not null " +
    "   then 'Credit' " +
    "   else 'Unknown' " +
    "   end ) " +
    "end "
)
public static class Account {

    @Id
    private Long id;

    private String owner;

    private BigDecimal balance;

    private BigDecimal interestRate;

    //Getters and setters are omitted for brevity
}

@Entity(name = "DebitAccount")
@DiscriminatorValue(value = "Debit")
public static class DebitAccount extends Account {

    private String debitKey;

    private BigDecimal overdraftFee;

    //Getters and setters are omitted for brevity
}

@Entity(name = "CreditAccount")
@DiscriminatorValue(value = "Credit")
public static class CreditAccount extends Account {

    private String creditKey;

    private BigDecimal creditLimit;

    //Getters and setters are omitted for brevity
}

Hibernate JPA @CreationTimestamp Example

@CreationTimestamp

The @CreationTimestamp annotation is used to specify that the currently annotated temporal type must be initialized with the current JVM timestamp value.

Marks a property as the creation timestamp of the containing entity. The property value will be set to the current VM date exactly once when saving the owning entity for the first time.
Supported property types:

java.util.Date
java.util.Calendar
java.sql.Date
java.sql.Time
java.sql.Timestamp
Instant
LocalDate
LocalDateTime
LocalTime
MonthDay
OffsetDateTime
OffsetTime
Year
YearMonth
ZonedDateTime

@CreationTimestamp annotation

The @CreationTimestamp annotation instructs Hibernate to set the annotated entity attribute with the current timestamp value of the JVM when the entity is being persisted.

The supported property types are:

java.util.Date

java.util.Calendar

java.sql.Date

java.sql.Time

java.sql.Timestamp

Example : @CreationTimestamp mapping example

@Entity(name = "Event")
public static class Event {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "`timestamp`")
    @CreationTimestamp
    private Date timestamp;

    //Constructors, getters, and setters are omitted for brevity
}

Hibernate JPA @ColumnTransformers Example

@ColumnTransformers

The @ColumnTransformers annotation is used to group multiple @ColumnTransformer annotations.

Hibernate JPA @ColumnTransformer Example

@ColumnTransformer

The @ColumnTransformer annotation is used to customize how a given column value is read from or written into the database.

ColumnTransformer
Custom SQL expression used to read the value from and write a value to a column. Use for direct object loading/saving as well as queries. The write expression must contain exactly one '?' placeholder for the value. For example: read="decrypt(credit_card_num)" write="encrypt(?)"

@ColumnTransformer example

@Entity(name = "Employee")
public static class Employee {

    @Id
    private Long id;

    @NaturalId
    private String username;

    @Column(name = "pswd")
    @ColumnTransformer(
        read = "decrypt( 'AES', '00', pswd  )",
        write = "encrypt('AES', '00', ?)"
    )
    private String password;

    private int accessLevel;

    @ManyToOne(fetch = FetchType.LAZY)
    private Department department;

    @ManyToMany(mappedBy = "employees")
    private List<Project> projects = new ArrayList<>();

    //Getters and setters omitted for brevity
}
If a property uses more than one column, you must use the forColumn attribute to specify which column the @ColumnTransformer read and write expressions are targeting.

Example : @ColumnTransformer forColumn attribute usage

@Entity(name = "Savings")
public static class Savings {

    @Id
    private Long id;

    @Embedded
    @ColumnTransformer(
        forColumn = "money",
        read = "money / 100",
        write = "? * 100"
    )
    private MonetaryAmount wallet;

    //Getters and setters omitted for brevity
}

2020-05-05

IntelliJ shortcut keys : Recompile 'class'

IntelliJ shortcut keys : Recompile 'class name'

Compile a single file or class

keys - ( Ctrl+Shift+F9 )

2020-05-03

For-each takes precedence over traditional for or while

Before Java 1.5, iterating over a collection required an explicit iterator:

for (Iterator i = c.iterator(); i.hasNext(); ) {
  doSomething((Element) i.next()); // (No generics before 1.5)
}
Looping over an array looked like this:

for (int i = 0; i < a.length; i++) {
  doSomething(a[i]);
}
These idioms are better than a while loop, but they still expose the index variable or iterator, leaving plenty of places where you can accidentally misuse them.

So the way recommended in this book is the for-each loop:

for (Element e : elements) {
  doSomething(e);
}
Read the colon as "in": for each element e in elements.

Performance is the same (sometimes even better), and the loop gives you no chance to introduce index or iterator bugs.

The author also gives an example of printing every card in a deck:

enum Suit { CLUB, DIAMOND, HEART, SPADE }
enum Rank { ACE, DEUCE, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT,
            NINE, TEN, JACK, QUEEN, KING }

Collection<Suit> suits = Arrays.asList(Suit.values());
Collection<Rank> ranks = Arrays.asList(Rank.values());
List<Card> deck = new ArrayList<Card>();
for (Iterator<Suit> i = suits.iterator(); i.hasNext(); ){
  for (Iterator<Rank> j = ranks.iterator(); j.hasNext(); ){
    deck.add(new Card(i.next(), j.next()));
  }
}
The bug: i.next() is called from the inner loop, so the suit iterator advances once per rank instead of once per suit, and the loop fails with NoSuchElementException once the suits run out. Avoiding that without for-each requires an extra variable to hold the current suit outside the inner loop.

It's really ugly and easy to get wrong. With for-each:

for (Suit suit : suits){
  for (Rank rank : ranks){
    deck.add(new Card(suit, rank));
  }
}
Simple and elegant.

Iterable

On what kinds of types can you use for-each? Any type that implements Iterable:

public interface Iterable<E> {
  // Returns an iterator over the elements in this iterable
  Iterator<E> iterator();
}
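
A minimal sketch of a custom type usable in for-each (the class and its contents are illustrative):

import java.util.Iterator;
import java.util.List;

class Playlist implements Iterable<String> {
    private final List<String> songs = List.of("intro", "verse", "chorus");

    // Implementing Iterable is all it takes to enable for-each.
    @Override
    public Iterator<String> iterator() {
        return songs.iterator();
    }
}

// for (String song : new Playlist()) { doSomething(song); }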

Exceptions

Basically, as long as the type you want to loop over implements Iterable, you should use for-each:

1. Same performance

2. Prevent bugs

3. Simple program

But there are three situations where you cannot use for-each:

1. Destructive filtering: to remove elements while traversing, you need an explicit iterator so that you can call its remove() method (see the sketch after this list)

2. Transforming: for-each can only read elements; to replace values you need the list iterator or the array index

3. Parallel iteration: when the "bug" above becomes the feature, i.e. you really want the iterators or index variables of the inner and outer loops to advance in lockstep
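
A brief sketch of destructive filtering (the list contents are illustrative):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

List<Integer> numbers = new ArrayList<>(List.of(1, 2, 3, 4, 5, 6));
for (Iterator<Integer> it = numbers.iterator(); it.hasNext(); ) {
    if (it.next() % 2 == 0) {
        // Removing through the list inside a for-each would throw
        // ConcurrentModificationException; the iterator's remove() is safe.
        it.remove();
    }
}
// numbers is now [1, 3, 5]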

Introduction to the Java virtual machine

In 1991, Sun Microsystems (acquired by Oracle in 2010) wanted to develop new technology that could control smart appliances (microwave ovens, TVs) and let those appliances communicate with each other. This effort was called the Green Project.

The difficulty was that the targets were home appliances, which imposed several constraints:

1. Little memory in embedded systems: programs had to consume as little memory as possible

2. Cross-platform: a program should not have to be recompiled for every device; compile once, run everywhere

3. Security: programs transferred between devices must not be able to damage a device, even if maliciously modified

4. Multi-thread support

Compiler

Initially the team planned to use C++, but C++ could not meet the cross-platform requirement. For example, a Hello binary compiled on Linux cannot be executed directly on Windows.

So although C++ was very popular at the time, it was ruled out first because it did not meet the requirements.

Interpreter

An interpreter lets a high-level language execute line by line without going through a compiler; Python and Ruby, which you often hear about, are interpreted languages.

So as long as each device has an interpreter for the language, the language is cross-platform.

But interpreters are notoriously slow, because they execute one line at a time and skip the optimizations a compiler would perform.

Is there a way to be both cross-platform and fast?


Compiler + interpreter

Java source is compiled to bytecode, stored in Hello.class. A later chapter introduces the bytecode format in detail; for now, think of it as playing the role that machine code plays for C.

Because the bytecode is already compiled, compact, and optimized, it runs much faster than pure interpretation. The bytecode is then handed to the platform-specific JVM, which produces the same result on top of different hardware and operating systems.

This is how Java achieves cross-platform execution. When this approach first came up, though, the language was not yet called Java, but Oak.

Follow-up to the Green Project

After the invention of Oak, I originally wanted to use this language for TVs, telephones and other home appliances. But the final conclusion is that this language is too strong and will give the TV users too much authority. At that time, the demand for smart home appliances was not high at all. Dead belly

In 1994, the Green Project team discovered that the central idea of ​​the popular world wide web is very close to their initial idea of ​​home appliances. Data was passed between devices, so they renamed their newly invented language. He wrote a small World Wide Web browser for Java, HotJava, which announced the programming language that has changed the software ecosystem for 30 years at SunWorld in 1995

The more important name here is James Gosling, a member of the Green Project, who is called the father of Java.

After Java appeared, everyone found that its style was very similar to C++ and that it was an object-oriented language. However, Java dropped C++'s cumbersome pointers in favor of references, removed multiple inheritance, and could run everywhere.

In 1998, JDK 1.2 was launched and rebranded as the Java 2 Platform. Not only that, Sun was ambitious enough to push three editions:


  • J2SE, the Standard Edition, targeting the desktop
  • J2EE, the Enterprise Edition, targeting servers
  • J2ME, the Micro Edition, targeting set-top boxes, mobile phones, and PDAs

What a vision: basically, succeeding in any one of these directions would count as a win. Predictably, the most successful was J2EE, which rode the Internet wave; its features, cross-platform, secure, fast, and easy to use (object-oriented), were exactly what Internet servers required.


The number of developers expanded rapidly, and some of the giants backing the C language expressed their interest. Chief among them was IBM, the overlord of the era, which quickly launched the WebSphere web server, followed by the heavyweight IDE Eclipse, and combined them with its own strong hardware into a one-stop product line (hardware + software + server). IBM's troika of that time was exactly like Google's troika (MapReduce, BigTable, GFS) a decade later, and its revenue skyrocketed.

Later, many challengers tried to take over the server side; the better-known ones, Ruby on Rails and PHP, were basically never a big threat to Java.

In 2006, Hadoop was born, with its underlying MapReduce implemented in Java. The subsequent rise of massive-scale data computing further boosted Java.

Android surfaced in 2008, and Google's strong backing took Java to yet another unprecedented peak.

In 2020, this series, "Introduction to the JVM that every programmer must understand," turns everyone's attention to Java's little-known underlying layer, lowering the threshold for learning 30-year-old Java and unveiling the long-standing mystery of the JVM.

Why is the JAVA virtual machine called the JAVA virtual machine?
I have read many books about virtual machines, and almost none of them answer this question. I think it is very helpful to understand the answer to this question first.

First of all, we must know what a virtual machine is. As the name implies, a virtual machine is not a physical machine: it uses software to simulate (implement) fully functional hardware. Something that simulates hardware in software is called a virtual machine, and a virtual machine can execute programs just like a physical machine.

To use an everyday example: even if you bought a Windows computer, you have probably heard of someone using a virtual machine to install a Linux operating system. After installation, you can execute Linux program instructions inside the virtual machine as if you were using Linux, without needing to know what the actual underlying hardware is.

The setup should be fairly obvious by now. Having read the paragraph above, you can probably guess what comes next: this concept is exactly our cross-platform goal, so Java adopts the same idea in its virtual machine. No matter what hardware and operating system sit underneath, as long as you have a Java virtual machine, you can execute Java bytecode inside it, regardless of the actual underlying hardware.

So when you write Java and compile Java, you are not making your program run on a specific Linux or Windows machine; you are making your program run on the JVM.

It turns out that virtual machines are so convenient

But to achieve this, a JVM that runs everywhere must follow very strict specifications, so that JVMs on different machines, given the same bytecode, produce the same result.

Three concepts of the JAVA virtual machine
Specification, implementation, and runtime instance

Specification:
People who write Java programs no longer need to consider which machine or operating system their code will run on; they only need to target the JVM. To achieve this, everyone who implements a JVM must comply with the Java Virtual Machine Specification released by Oracle: it is the set of rules that JVM implementers must abide by. As long as you follow the rules, anyone can implement their own virtual machine.

Implementation:
Where there is a specification, there are implementations. The most famous are the following two old friends:

Oracle HotSpot JVM

IBM's JVM

(You can see who implemented your virtual machine by typing java -version on your terminal)

Runtime instance:
Where there is an implementation, there are runtime instances. When you execute java Hello, an instance of a JVM is created in memory, and then the Hello class file is loaded.

Note that each runtime instance runs exactly one Java application, so if you type java Hello five times in the terminal, five runtime instances are created to run five programs.

JVM architecture
The following figure shows all the main components that make up a JVM.

Prefer try-with-resources to try-finally

The Java libraries include many resources that must be closed manually by invoking a close method. Examples include InputStream, OutputStream, and java.sql.Connection. Closing resources is often overlooked by clients, with predictably dire performance consequences. While many of these resources use finalizers as a safety net, finalizers don’t work very well (Item 8). Historically, a try-finally statement was the best way to guarantee that a resource would be closed properly, even in the face of an exception or return:
// try-finally - No longer the best way to close resources!
static String firstLineOfFile(String path) throws IOException {
    BufferedReader br = new BufferedReader(new FileReader(path));
    try {
        return br.readLine();
    } finally {
        br.close();
    }
}
This may not look bad, but it gets worse when you add a second resource:
// try-finally is ugly when used with more than one resource!
static void copy(String src, String dst) throws IOException {
    InputStream in = new FileInputStream(src);
    try {
        OutputStream out = new FileOutputStream(dst);
        try {
            byte[] buf = new byte[BUFFER_SIZE];
            int n;
            while ((n = in.read(buf)) >= 0)
                out.write(buf, 0, n);
        } finally {
            out.close();
        }
    } finally {
        in.close();
    }
}
It may be hard to believe, but even good programmers got this wrong most of the time.
Even the correct code for closing resources with try-finally statements, as illustrated in the previous two code examples, has a subtle deficiency. The code in both the try block and the finally block is capable of throwing exceptions. For example, in the firstLineOfFile method, the call to readLine could throw an exception due to a failure in the underlying physical device, and the call to close could then fail for the same reason. Under these circumstances, the second exception completely obliterates the first one. There is no record of the first exception in the exception stack trace, which can greatly complicate debugging in real systems—usually it’s the first exception that you want to see in order to diagnose the problem. While it is possible to write code to suppress the second exception in favor of the first, virtually no one did because it’s just too verbose.

All of these problems were solved in one fell swoop when Java 7 introduced the try-with-resources statement [JLS, 14.20.3]. To be usable with this construct,
a resource must implement the AutoCloseable interface, which consists of a single void-returning close method. Many classes and interfaces in the Java
libraries and in third-party libraries now implement or extend AutoCloseable. If you write a class that represents a resource that must be closed, your class should implement AutoCloseable too.
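
As a minimal sketch of what such a class might look like (ScratchBuffer is a made-up name, not from the book or any library):

// A hypothetical resource: implementing AutoCloseable is all it takes
// for a class to participate in try-with-resources.
public class ScratchBuffer implements AutoCloseable {
    private boolean open = true;

    public void write(String data) {
        if (!open) throw new IllegalStateException("buffer is closed");
        // ... store the data in some underlying resource ...
    }

    // AutoCloseable's single void-returning method.
    @Override
    public void close() {
        open = false; // release the underlying resource here
    }
}

A caller can then write try (ScratchBuffer buf = new ScratchBuffer()) { buf.write("hello"); } and the buffer is closed automatically, even if write throws.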

Here’s how our first example looks using try-with-resources:
// try-with-resources - the best way to close resources!
static String firstLineOfFile(String path) throws IOException {
    try (BufferedReader br = new BufferedReader(new FileReader(path))) {
        return br.readLine();
    }
}
And here’s how our second example looks using try-with-resources:
// try-with-resources on multiple resources - short and sweet
static void copy(String src, String dst) throws IOException {
    try (InputStream in = new FileInputStream(src);
         OutputStream out = new FileOutputStream(dst)) {
        byte[] buf = new byte[BUFFER_SIZE];
        int n;
        while ((n = in.read(buf)) >= 0)
            out.write(buf, 0, n);
    }
}

Not only are the try-with-resources versions shorter and more readable than the originals, but they provide far better diagnostics. Consider the firstLineOfFile method. If exceptions are thrown by both the readLine call and the (invisible) close, the latter exception is suppressed in favor of the former. In fact, multiple exceptions may be suppressed in order to preserve the exception that you actually want to see. These suppressed exceptions are not merely discarded; they are printed in the stack trace with a notation saying that they were suppressed. You can also access them programmatically with the getSuppressed method, which was added to Throwable in Java 7.
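
As a small illustrative sketch (the wrapper method is hypothetical; firstLineOfFile is the method defined above), suppressed exceptions can be inspected programmatically:

// Inspecting exceptions that try-with-resources suppressed (Java 7+)
static void printFirstLine(String path) {
    try {
        System.out.println(firstLineOfFile(path)); // method defined above
    } catch (IOException e) {
        // e is the primary exception (e.g., from readLine); a failing
        // close() is attached to it rather than replacing it
        for (Throwable suppressed : e.getSuppressed()) {
            System.err.println("suppressed: " + suppressed);
        }
    }
}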

You can put catch clauses on try-with-resources statements, just as you can on regular try-finally statements. This lets you handle exceptions without sullying your code with another layer of nesting. As a slightly contrived example, here's a version of our firstLineOfFile method that does not throw exceptions, but takes a default value to return if it can't open the file or read from it:
// try-with-resources with a catch clause
static String firstLineOfFile(String path, String defaultVal) {
    try (BufferedReader br = new BufferedReader(new FileReader(path))) {
        return br.readLine();
    } catch (IOException e) {
        return defaultVal;
    }
}
The lesson is clear: Always use try-with-resources in preference to try-finally when working with resources that must be closed. The resulting code is shorter and clearer, and the exceptions that it generates are more useful. The try-with-resources statement makes it easy to write correct code using resources that must be closed, which was practically impossible using try-finally.

Always override hashCode when you override equals

You must override hashCode in every class that overrides equals. If you fail to do so, your class will violate the general contract for hashCode, which will
prevent it from functioning properly in collections such as HashMap and HashSet.

Here is the contract, adapted from the Object specification :
• When the hashCode method is invoked on an object repeatedly during an execution of an application, it must consistently return the same value,
provided no information used in equals comparisons is modified. This value need not remain consistent from one execution of an application to another.
• If two objects are equal according to the equals(Object) method, then calling hashCode on the two objects must produce the same integer result.
• If two objects are unequal according to the equals(Object) method, it is not required that calling hashCode on each of the objects must produce distinct results. However, the programmer should be aware that producing distinct results for unequal objects may improve the performance of hash tables.

The key provision that is violated when you fail to override hashCode is the second one: equal objects must have equal hash codes. Two distinct instances may be logically equal according to a class's equals method, but to Object's hashCode method, they're just two objects with nothing much in common. Therefore, Object's hashCode method returns two seemingly random numbers instead of two equal numbers as required by the contract.

For example, suppose you attempt to use instances of the PhoneNumber class
from Item 10 as keys in a HashMap:
Map<PhoneNumber, String> m = new HashMap<>();
m.put(new PhoneNumber(707, 867, 5309), "Jenny");
At this point, you might expect m.get(new PhoneNumber(707, 867, 5309)) to return "Jenny", but instead, it returns null. Notice that two PhoneNumber instances are involved: one is used for insertion into the HashMap, and a second, equal instance is used for (attempted) retrieval. The PhoneNumber class’s failure to
override hashCode causes the two equal instances to have unequal hash codes, in violation of the hashCode contract. Therefore, the get method is likely to look for the phone number in a different hash bucket from the one in which it was stored by the put method. Even if the two instances happen to hash to the same bucket, the get method will almost certainly return null, because HashMap has an optimization that caches the hash code associated with each entry and doesn’t bother checking for object equality if the hash codes don’t match.
Fixing this problem is as simple as writing a proper hashCode method for PhoneNumber. So what should a hashCode method look like? It’s trivial to write a
bad one. This one, for example, is always legal but should never be used:
// The worst possible legal hashCode implementation - never use!
@Override public int hashCode() { return 42; }
It’s legal because it ensures that equal objects have the same hash code. It’s atrocious because it ensures that every object has the same hash code. Therefore, every object hashes to the same bucket, and hash tables degenerate to linked lists. Programs that should run in linear time instead run in quadratic time. For large hash tables, this is the difference between working and not working.
A good hash function tends to produce unequal hash codes for unequal instances. This is exactly what is meant by the third part of the hashCode contract.
Ideally, a hash function should distribute any reasonable collection of unequal instances uniformly across all int values. Achieving this ideal can be difficult. Luckily it’s not too hard to achieve a fair approximation. Here is a simple recipe:
1. Declare an int variable named result, and initialize it to the hash code c for the first significant field in your object, as computed in step 2.a. (Recall from Item 10 that a significant field is a field that affects equals comparisons.)

2. For every remaining significant field f in your object, do the following:

   a. Compute an int hash code c for the field:

      i. If the field is of a primitive type, compute Type.hashCode(f), where Type is the boxed primitive class corresponding to f's type.

      ii. If the field is an object reference and this class's equals method compares the field by recursively invoking equals, recursively invoke hashCode on the field. If a more complex comparison is required, compute a "canonical representation" for this field and invoke hashCode on the canonical representation. If the value of the field is null, use 0 (or some other constant, but 0 is traditional).

      iii. If the field is an array, treat it as if each significant element were a separate field. That is, compute a hash code for each significant element by applying these rules recursively, and combine the values per step 2.b. If the array has no significant elements, use a constant, preferably not 0. If all elements are significant, use Arrays.hashCode.

   b. Combine the hash code c computed in step 2.a into result as follows: result = 31 * result + c;

3. Return result.

When you are finished writing the hashCode method, ask yourself whether equal instances have equal hash codes. Write unit tests to verify your intuition (unless you used AutoValue to generate your equals and hashCode methods, in which case you can safely omit these tests). If equal instances have unequal hash codes, figure out why and fix the problem.

You may exclude derived fields from the hash code computation. In other words, you may ignore any field whose value can be computed from fields included in the computation. You must exclude any fields that are not used in equals comparisons, or you risk violating the second provision of the hashCode contract.

The multiplication in step 2.b makes the result depend on the order of the fields, yielding a much better hash function if the class has multiple similar fields. For example, if the multiplication were omitted from a String hash function, all anagrams would have identical hash codes. The value 31 was chosen because it is an odd prime. If it were even and the multiplication overflowed, information would be lost, because multiplication by 2 is equivalent to shifting. The advantage of using a prime is less clear, but it is traditional. A nice property of 31 is that the multiplication can be replaced by a shift and a subtraction for better performance on some architectures: 31 * i == (i << 5) - i. Modern VMs do this sort of optimization automatically.
Let’s apply the previous recipe to the PhoneNumber class:
// Typical hashCode method
@Override public int hashCode() {
    int result = Short.hashCode(areaCode);
    result = 31 * result + Short.hashCode(prefix);
    result = 31 * result + Short.hashCode(lineNum);
    return result;
}
Because this method returns the result of a simple deterministic computation whose only inputs are the three significant fields in a PhoneNumber instance, it is clear that equal PhoneNumber instances have equal hash codes. This method is, in fact, a perfectly good hashCode implementation for PhoneNumber, on par with those in the Java platform libraries. It is simple, is reasonably fast, and does a reasonable job of dispersing unequal phone numbers into different hash buckets.

While the recipe in this item yields reasonably good hash functions, they are not state-of-the-art. They are comparable in quality to the hash functions found in the Java platform libraries' value types and are adequate for most uses. If you have a bona fide need for hash functions less likely to produce collisions, see Guava's com.google.common.hash.Hashing [Guava].

The Objects class has a static method that takes an arbitrary number of objects and returns a hash code for them. This method, named hash, lets you write
one-line hashCode methods whose quality is comparable to those written according to the recipe in this item. Unfortunately, they run more slowly because they entail array creation to pass a variable number of arguments, as well as boxing and unboxing if any of the arguments are of primitive type. This style of hash function is recommended for use only in situations where performance is not critical.
Here is a hash function for PhoneNumber written using this technique:
// One-line hashCode method - mediocre performance
@Override public int hashCode() {
    return Objects.hash(lineNum, prefix, areaCode);
}
If a class is immutable and the cost of computing the hash code is significant, you might consider caching the hash code in the object rather than recalculating it each time it is requested. If you believe that most objects of this type will be used as hash keys, then you should calculate the hash code when the instance is created. Otherwise, you might choose to lazily initialize the hash code the first time hashCode is invoked. Some care is required to ensure that the class remains thread-safe in the presence of a lazily initialized field (Item 83). Our PhoneNumber class does not merit this treatment, but just to show you how it's done, here it is. Note that the initial value for the hashCode field (in this case, 0) should not be the hash code of a commonly created instance:
// hashCode method with lazily initialized cached hash code
private int hashCode; // Automatically initialized to 0

@Override public int hashCode() {
    int result = hashCode;
    if (result == 0) {
        result = Short.hashCode(areaCode);
        result = 31 * result + Short.hashCode(prefix);
        result = 31 * result + Short.hashCode(lineNum);
        hashCode = result;
    }
    return result;
}
Do not be tempted to exclude significant fields from the hash code computation to improve performance. While the resulting hash function may run faster, its poor quality may degrade hash tables' performance to the point where they become unusable. In particular, the hash function may be confronted with a large collection of instances that differ mainly in regions you've chosen to ignore. If this happens, the hash function will map all these instances to a few hash codes, and programs that should run in linear time will instead run in quadratic time. This is not just a theoretical problem. Prior to Java 2, the String hash function used at most sixteen characters evenly spaced throughout the string, starting with the first character. For large collections of hierarchical names, such as URLs, this function displayed exactly the pathological behavior described earlier.

Don't provide a detailed specification for the value returned by hashCode, so clients can't reasonably depend on it; this gives you the flexibility to change it. Many classes in the Java libraries, such as String and Integer, specify the exact value returned by their hashCode method as a function of the instance value. This is not a good idea but a mistake that we're forced to live with: It impedes the ability to improve the hash function in future releases. If you leave the details unspecified and a flaw is found in the hash function or a better hash function is discovered, you can change it in a subsequent release.

In summary, you must override hashCode every time you override equals, or your program will not run correctly. Your hashCode method must obey the general contract specified in Object and must do a reasonable job assigning unequal hash codes to unequal instances. This is easy to achieve, if slightly tedious, using the recipe on page 51. As mentioned in Item 10, the AutoValue framework provides a fine alternative to writing equals and hashCode methods manually, and IDEs also provide some of this functionality.