2020-07-28

Spring Supplier Example

Non-Spring Developers


Let us assume for a second that you are not a Spring developer and are not familiar with Spring Integration, which already provides abstractions for ROME. In that case, we can certainly use ROME directly to produce feed records. For example, this is a valid Supplier for this scenario.

public Supplier<SyndEntry> feedSupplier() {
    return () -> {
        // Use the ROME framework directly to produce syndicated entries.
    };
}

The benefit here is that we can develop the supplier without any knowledge of Spring, and it can be deployed to a serverless environment directly, using the abstractions provided by that environment or by relying on a framework like Spring Cloud Function.

This essentially means that if you are a Java developer without much Spring Framework experience, you can still write the functions by using just the interfaces defined in the java.util.function package, such as Function, Supplier, and Consumer, and providing the business logic.
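For illustration, here is a minimal sketch of such a plain-Java supplier, assuming a hypothetical feed URL; it uses ROME's SyndFeedInput to fetch and parse the feed and returns its first entry:

import java.net.URL;
import java.util.function.Supplier;

import com.rometools.rome.feed.synd.SyndEntry;
import com.rometools.rome.feed.synd.SyndFeed;
import com.rometools.rome.io.SyndFeedInput;
import com.rometools.rome.io.XmlReader;

public class PlainJavaFeedSupplier {

    // Hypothetical feed URL, used only for illustration.
    private static final String FEED_URL = "https://spring.io/blog.atom";

    public Supplier<SyndEntry> feedSupplier() {
        return () -> {
            try {
                SyndFeed feed = new SyndFeedInput().build(new XmlReader(new URL(FEED_URL)));
                // Return the first entry; a real implementation would track what was already read.
                return feed.getEntries().get(0);
            }
            catch (Exception e) {
                throw new IllegalStateException("Unable to read feed", e);
            }
        };
    }
}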

Spring Developers


Add the following Spring Integration feed adapter dependency to the project. This brings in the feed adapter from Spring Integration, as well as any other transitive dependencies.

<dependency>
   <groupId>org.springframework.integration</groupId>
   <artifactId>spring-integration-feed</artifactId>
</dependency>
Adding basic configuration properties
Now that we have our core dependency in, let’s start writing some code. Since the functions are expected to be used in a Spring Boot context, we need to create a ConfigurationProperties class to drive the configuration for the supplier function. Here is what it might look like.

package org.springframework.cloud.fn.supplier.feed;

@ConfigurationProperties("feed.supplier")
public class FeedSupplierProperties {

    /**
     * Key used in the metadata store to avoid duplicate reads from the feed.
     */
    private String metadataKey;

    /**
     * Feed URL.
     */
    private URL feedUrl;

    // rest is omitted
}
As we can see, all the properties use the feed.supplier prefix.

Adding the Configuration class
Next, let’s create a Spring-based configuration class where we provide all the necessary components. We will build it incrementally. Below is the basic structure of the class.

package org.springframework.cloud.fn.supplier.feed;
...
@Configuration
@EnableConfigurationProperties(FeedSupplierProperties.class)
public class FeedSupplierConfiguration {

}
Add these fields to the class.

private final ConcurrentMetadataStore metadataStore;

private final Resource resource;

private final FeedSupplierProperties feedSupplierProperties;
A quick note on these fields: the feed adapter in Spring Integration provides a capability to avoid reading entries from a feed that have already been read. The metadataKey property we defined above is used for this purpose, and it works by using a metadata store. There are various metadata stores available for popular databases. Include the following dependency for a simple in-memory metadata store.

<dependency>
   <groupId>org.springframework.cloud.fn</groupId>
   <artifactId>metadata-store-common</artifactId>
   <version>${project.version}</version>
</dependency>
Note that this requirement is specific to this supplier and not all suppliers may need it.
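If you need to register such a store explicitly, a minimal sketch of a bean definition might look like the following; SimpleMetadataStore is the in-memory implementation from Spring Integration, and this assumes no other store is auto-configured:

@Bean
public ConcurrentMetadataStore metadataStore() {
    // org.springframework.integration.metadata.SimpleMetadataStore keeps entries in memory
    return new SimpleMetadataStore();
}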

Users can provide a Resource bean for reading the feed if there is no HTTP (or HTTPS) based URL available (which we can set through the configuration property).

Let’s add a constructor to use these fields.

FeedSupplierConfiguration(FeedSupplierProperties feedSupplierProperties,
                   ConcurrentMetadataStore metadataStore,
                   @Nullable Resource resource) {
  this.feedSuppplierProperties = feedSupplierProperties;
  this.metadataStore = metadataStore;
  this.resource = resource;
}
Resource is nullable because most often we can simply pass the URL string as a configuration property and not provide a Resource bean.
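For the local-file case, such a Resource bean might look like this (the classpath file name here is hypothetical):

@Bean
public Resource feedResource() {
    // org.springframework.core.io.ClassPathResource pointing at a feed file bundled with the app
    return new ClassPathResource("atom.xml");
}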

The Spring Integration feed adapter provides FeedEntryMessageSource, which is a MessageSource implementation. We will use this message source in our supplier. Let’s set it up as a Spring bean. The code below is pretty self-explanatory.

@Bean
public FeedEntryMessageSource feedEntryMessageSource() {
  final FeedEntryMessageSource feedEntryMessageSource = this.resource == null
        ? new FeedEntryMessageSource(this.feedSupplierProperties.getFeedUrl(), this.feedSupplierProperties.getMetadataKey())
        : ...
  return feedEntryMessageSource;
}

Non-Reactive Supplier

Now that we have the MessageSource bean ready, it is relatively trivial to write a simple Supplier and invoke it programmatically by calling the get method of the supplier. Here it is.

@Bean
public Supplier<Message<SyndEntry>> feedSupplier() {
  return () -> feedEntryMessageSource().receive();
}
We can inject this Supplier bean into our application and call its get method programmatically. When this Supplier is used in a Spring Cloud Stream application (as we will see later), it uses a default poller provided by Spring Cloud Stream, which triggers the supplier every second by default. That schedule can be changed in the poller configuration.
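For example, a hypothetical component with the supplier injected could invoke it like this (receive() returns null when there is nothing new to read):

public class FeedPoller {

    private final Supplier<Message<SyndEntry>> feedSupplier;

    FeedPoller(Supplier<Message<SyndEntry>> feedSupplier) {
        this.feedSupplier = feedSupplier;
    }

    void readOnce() {
        Message<SyndEntry> message = this.feedSupplier.get();
        if (message != null) {
            // process the payload
            SyndEntry entry = message.getPayload();
        }
    }
}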


Reactive Supplier

The non-reactive polling solution looks alright, but we might ask: what if I don’t want to poll explicitly every so often, but instead want the data in a streaming manner, as soon as it becomes available in the message source? Well, we have a solution for that: develop a reactive supplier that delivers the data as soon as it becomes available. Let’s see the details.

Here again, Spring Integration provides some abstractions we can use to convert our FeedEntryMessageSource into a reactive publisher as shown below.

@Bean
public Supplier<Flux<Message<SyndEntry>>> feedSupplier() {
  return () -> IntegrationReactiveUtils.messageSourceToFlux(feedEntryMessageSource());
}
You may notice that this supplier returns a Flux<Message<SyndEntry>>, as opposed to the Message<SyndEntry> returned by the initial non-reactive supplier, in which we relied on programmatic invocation of the supplier or some other polling mechanism.

Other Reactive Solutions
Ok, it was nice that we had a MessageSource coming from Spring Integration and could use that utility method to convert it to a Flux. What if there is no such MessageSource and we have to hand-craft the basic retrieval of the data for the system for which we want to write a reactive-style supplier? For those cases, we can use the various facilities provided by Project Reactor and programmatically feed the data to them. The bottom line is that, when we write a reactive streaming supplier, we have to return the data as a Flux.
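As a minimal sketch of this hand-crafted approach, assuming a hypothetical callback-based client, a Reactor Sinks.Many can bridge the callbacks to the Flux that the supplier returns:

import java.util.function.Supplier;

import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

public class HandCraftedReactiveSupplier {

    // The sink is the programmatic bridge between the external data source and the Flux.
    private final Sinks.Many<Message<String>> sink = Sinks.many().unicast().onBackpressureBuffer();

    // Hypothetical callback, invoked by the external system whenever new data arrives.
    public void onData(String data) {
        this.sink.tryEmitNext(MessageBuilder.withPayload(data).build());
    }

    public Supplier<Flux<Message<String>>> handCraftedSupplier() {
        return this.sink::asFlux;
    }
}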

Unit Testing the Reactive Supplier
Let’s add a unit test for this reactive supplier. We can use the atom feed example described in RFC 4287 - The Atom Syndication Format as our test data. Include it in src/test/resources.

Here is the test class.

@SpringBootTest(properties = {"feed.supplier.feedUrl=classpath:atom.xml",
     "feed.supplier.metadataKey=feedTest" })
@DirtiesContext
public class FeedSupplierTests {

  @Autowired
  Supplier<Flux<Message<SyndEntry>>> feedSupplier;

  @Test
  public void testFromSampleRssFile() {
     final Flux<Message<SyndEntry>> messageFlux = feedSupplier.get();

     StepVerifier.create(messageFlux)
           .assertNext((message) -> {
              assertThat(message.getPayload().getTitle().trim()).isEqualTo("Atom draft-07 snapshot");
              assertThat(message.getPayload().getContents().size()).isEqualTo(1);
              assertThat(message.getPayload().getContents().get(0).getValue().contains("The Atom draft is finished.")).isTrue();
           })
           .thenCancel()
           .verify();
  }

  @SpringBootApplication
  static class FeedSupplierTestApplication {

  }

}
Adding the Supplier function to the maven BOM for functions
The functions project aggregates all the available functions in a Maven BOM. Add the feed-supplier to this BOM. This is needed primarily if you are generating the Spring Cloud Stream applications based on this function.

Generating Spring Cloud Stream Applications from the Supplier
At this point in the process, we can submit a pull request to the repository with our supplier, but if we want to generate Spring Cloud Stream binder based applications from the supplier, keep on reading. Once generated, these applications can be run standalone or as part of a data orchestration pipeline in Spring Cloud Data Flow.

Go ahead and create a new module called feed-source under applications/source. As we have mentioned in the previous blogs, java.util.function.Supplier is mapped as a Spring Cloud Stream Source.

We don’t need to add any custom code on top of our feed supplier, as it can be used as is. However, now that we are talking about a Spring Cloud Stream application, we need to use the test binder with the supplier function to see how the supplier works with Spring Cloud Stream.

We can use one of the existing sources as a template to guide us through the process. We can even copy one of them and make changes incrementally.

All the apps use the parent pom stream-applications-core which brings all the necessary test dependencies, like the test binder mentioned above. It also provides the infrastructure for the application generator plugin that is responsible for generating the binder based applications.

One point that we would like to emphasize is that, unless the application module contains custom code, this module simply becomes an application generator that generates the binder-based applications. In other words, we won’t add a class with @SpringBootApplication to it; rather, it is generated for us.

Testing the supplier with the test binder
Add the following dependency for testing with the test binder:

<dependencies>
   <dependency>
       <groupId>org.springframework.cloud.fn</groupId>
       <artifactId>feed-supplier</artifactId>
       <scope>test</scope>
   </dependency>
</dependencies>
Now we can add a test to verify that the feed-supplier works with the test binder in Spring Cloud Stream. Basically, we need to ensure that the supplier produces the data through the test binder and that it is delivered to the destination on the test binder.

Here is the general idea behind the test:

public class FeedSourceTests {

  @Test
   public void testFeedSource() throws Exception {
     try (ConfigurableApplicationContext context = new SpringApplicationBuilder(
           TestChannelBinderConfiguration
                 .getCompleteConfiguration(FeedSourceTestApplication.class))
           .web(WebApplicationType.NONE)
           .run("--spring.cloud.function.definition=feedSupplier", "--feed.supplier.feedUrl=classpath:atom.xml", "--feed.supplier.metadataKey=feedTest")) {

        OutputDestination target = context.getBean(OutputDestination.class);
        Message<byte[]> sourceMessage = target.receive(10000);
        Object title = JsonPath.parse(new String(sourceMessage.getPayload())).read("$.title");
        assertThat(title).isEqualTo("Atom draft-07 snapshot");
     }
  }

  @SpringBootApplication
  @Import(FeedSupplierConfiguration.class)
  public static class FeedSourceTestApplication {

  }
}
The test is largely similar to the unit test we added for the supplier, but with a big difference. In the supplier test, we invoked the supplier directly and verified the data produced. Here, we do not invoke the supplier directly; the binding mechanism in Spring Cloud Stream does that for us automatically. We receive the data from the outbound destination and then verify it.

Once the test passes, it is time for us to generate the applications.

Generating the binder based applications
By default, the plugin generates applications for both Kafka and Rabbit binders in Spring Cloud Stream. This is configured in the parent pom in stream-applications-core. If we have a need to customize the generation for different binders, we need to make those changes there. Below is the configuration for the application generator plugin.

<plugin>
   <groupId>org.springframework.cloud.stream.app.plugin</groupId>
   <artifactId>spring-cloud-stream-app-maven-plugin</artifactId>
   <configuration>
       <generatedApp>
           <name>feed</name>
           <type>source</type>
           <version>${project.version}</version>
           <configClass>org.springframework.cloud.fn.supplier.feed.FeedSupplierConfiguration.class</configClass>
       </generatedApp>
       <dependencies>
           <dependency>
               <groupId>org.springframework.cloud.fn</groupId>
               <artifactId>feed-supplier</artifactId>
           </dependency>
           <dependency>
               <groupId>org.springframework.cloud.stream.app</groupId>
               <artifactId>stream-applications-composite-function-support</artifactId>
               <version>${stream-apps-core.version}</version>
           </dependency>
       </dependencies>
   </configuration>
</plugin>
Let’s quickly go over some details here. We are requesting the plugin to create an application with the name feed-source and want it to use our supplier developed above as the main configuration class. In the dependencies section of the plugin, we also need to add any dependencies that the app needs, feed-supplier in this case. We need to add all our processor functions to all the generated source applications, because we can compose the source with other processors without requiring them to run as individual microservices, as we have seen in the previous blog. More details on function composition with the processors can be found there as well. This is why we add the stream-applications-composite-function-support dependency in the dependencies section of the plugin.

Build the application module, and we will see the binder-based apps in the apps folder. They will be named feed-source-kafka and feed-source-rabbit. We can go to either of those applications, build it, and then use it as a standalone application or as part of a pipeline in Spring Cloud Data Flow.

2020-07-26

Hibernate JPA @Polymorphism Example

The @Polymorphism annotation is used to define the PolymorphismType Hibernate will apply to entity hierarchies.

There are two possible PolymorphismType options:

EXPLICIT
The currently annotated entity is retrieved only if explicitly asked for.

IMPLICIT
The currently annotated entity is retrieved if any of its super entities are retrieved. This is the default option.

Implicit and explicit polymorphism

By default, when you query a base class entity, the polymorphic query will fetch all subclasses belonging to the base type.

However, you can even query interfaces or base classes that don’t belong to the JPA entity inheritance model.

For instance, consider the following DomainModelEntity interface:

Example : DomainModelEntity interface
public interface DomainModelEntity<ID> {

    ID getId();

    Integer getVersion();
}
If we have two entity mappings, a Book and a Blog, and the Blog entity is mapped with the @Polymorphism annotation using the PolymorphismType.EXPLICIT setting:

Example: @Polymorphism entity mapping
@Entity(name = "Book")
public static class Book implements DomainModelEntity<Long> {

    @Id
    private Long id;

    @Version
    private Integer version;

    private String title;

    private String author;

    //Getter and setters omitted for brevity
}

@Entity(name = "Blog")
@Polymorphism(type = PolymorphismType.EXPLICIT)
public static class Blog implements DomainModelEntity<Long> {

    @Id
    private Long id;

    @Version
    private Integer version;

    private String site;

    //Getter and setters omitted for brevity
}
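With this mapping in place, a polymorphic query against the interface returns only Book instances, because Blog must be asked for explicitly. A minimal sketch, given an EntityManager and assuming the interface lives in a hypothetical org.example package:

// Only Book entities are fetched; Blog opted out of implicit polymorphism.
List<DomainModelEntity> entities = entityManager
        .createQuery(
            "select e " +
            "from org.example.DomainModelEntity e")
        .getResultList();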

Hibernate JPA @Persister Example

The @Persister annotation is used to specify a custom entity or collection persister.

For entities, the custom persister must implement the EntityPersister interface.

For collections, the custom persister must implement the CollectionPersister interface.

Define a custom entity persister

Example : Entity persister mapping

@Entity
@Persister( impl = EntityPersister.class )
public class Author {

    @Id
    public Integer id;

    @OneToMany( mappedBy = "author" )
    @Persister( impl = CollectionPersister.class )
    public Set<Book> books = new HashSet<>();

    //Getters and setters omitted for brevity

    public void addBook(Book book) {
        this.books.add( book );
        book.setAuthor( this );
    }
}
@Entity
@Persister( impl = EntityPersister.class )
public class Book {

    @Id
    public Integer id;

    private String title;

    @ManyToOne(fetch = FetchType.LAZY)
    public Author author;

    //Getters and setters omitted for brevity
}
By providing your own EntityPersister and CollectionPersister implementations, you can control how entities and collections are persisted into the database.
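Note that EntityPersister and CollectionPersister in the mapping above are the names of the custom classes in the Hibernate user guide example, not the SPI interfaces themselves. A minimal sketch of such a custom entity persister, assuming Hibernate 5.3+, typically extends an existing persister such as SingleTableEntityPersister rather than implementing the interface from scratch:

import org.hibernate.HibernateException;
import org.hibernate.cache.spi.access.EntityDataAccess;
import org.hibernate.cache.spi.access.NaturalIdDataAccess;
import org.hibernate.mapping.PersistentClass;
import org.hibernate.persister.entity.SingleTableEntityPersister;
import org.hibernate.persister.spi.PersisterCreationContext;

public class CustomEntityPersister extends SingleTableEntityPersister {

    // Delegate to the standard persister; override methods to customize behavior.
    public CustomEntityPersister(
            PersistentClass persistentClass,
            EntityDataAccess cacheAccessStrategy,
            NaturalIdDataAccess naturalIdAccessStrategy,
            PersisterCreationContext creationContext) throws HibernateException {
        super(persistentClass, cacheAccessStrategy, naturalIdAccessStrategy, creationContext);
    }
}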

Hibernate JPA @Parent Example

The @Parent annotation is used to specify that the currently annotated embeddable attribute references back the owning entity.

In other words, @Parent marks the property as a pointer back to the owner (generally the owning entity).

@Parent mapping
The Hibernate-specific @Parent annotation allows you to reference the owner entity from within an embeddable.

Example: @Parent mapping usage
@Embeddable
public static class GPS {

    private double latitude;

    private double longitude;

    @Parent
    private City city;

    //Getters and setters omitted for brevity
}

@Entity(name = "City")
public static class City {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    @Embedded
    @Target( GPS.class )
    private GPS coordinates;

    //Getters and setters omitted for brevity
}
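When the City entity is loaded, Hibernate populates the @Parent attribute of the embeddable automatically. A small sketch, given an EntityManager and using the getters omitted above:

City city = entityManager.find(City.class, 1L);
GPS coordinates = city.getCoordinates();
// The @Parent attribute points back to the owning entity:
assert coordinates.getCity() == city;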

Hibernate JPA @Parameter Example

The @Parameter annotation is a generic parameter (basically a key/value combination) used to parametrize other annotations, such as @CollectionType, @GenericGenerator, @Type, and @TypeDef.
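For example, here is a sketch of @Parameter configuring a @GenericGenerator; the sequence_name and increment_size keys are configuration parameters understood by Hibernate's SequenceStyleGenerator, while the entity itself is hypothetical:

@Entity
public class Book {

    @Id
    @GeneratedValue(generator = "book_sequence_generator")
    @GenericGenerator(
        name = "book_sequence_generator",
        strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
        parameters = {
            // Hibernate's @Parameter (org.hibernate.annotations.Parameter), not the JPA one
            @Parameter(name = "sequence_name", value = "book_sequence"),
            @Parameter(name = "increment_size", value = "5")
        }
    )
    private Long id;

    private String title;

    // Getters and setters omitted for brevity
}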

Hibernate JPA @ParamDef Example

@ParamDef is a parameter definition. The annotation is used in conjunction with @FilterDef so that the Hibernate Filter can be customized with runtime-provided parameter values.
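A minimal sketch, assuming a hypothetical Book entity with a title column; the filter restricts results by a minimum title length provided at runtime:

@Entity
@FilterDef(
    name = "minTitleLength",
    parameters = @ParamDef(name = "minLength", type = "int")
)
@Filter(name = "minTitleLength", condition = "length(title) >= :minLength")
public class Book {

    @Id
    private Long id;

    private String title;

    // Getters and setters omitted for brevity
}

The filter is then enabled on the underlying Hibernate Session with a concrete parameter value:

entityManager
    .unwrap(Session.class)
    .enableFilter("minTitleLength")
    .setParameter("minLength", 5);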


2020-07-25

Java - Linear Search Example

Linear search, also called sequential search, is a very simple method used for searching an array for a particular value. It works by comparing the value to be searched with every element of the array, one by one, in sequence until a match is found. Linear search is mostly used to search an unordered list of elements (an array in which the data elements are not sorted).
For example, suppose an array A[] is declared and initialized as

int A[] = {10, 8, 2, 7, 3, 4, 9, 1, 6, 5};

and the value to be searched is VAL = 7. Searching then means finding whether the value 7 is present in the array or not.
If yes, the search returns the position of its occurrence.
Here,
POS = 3 (index starting from 0).

Algorithm for linear search:

LINEAR_SEARCH(A, N, VAL)
Step 1: [INITIALIZE] SET POS = -1
Step 2: [INITIALIZE] SET I = 0
Step 3: Repeat Step 4 while I < N
Step 4:     IF A[I] = VAL
                SET POS = I
                PRINT POS
                Go to Step 6
            [END OF IF]
            SET I = I + 1
        [END OF LOOP]
Step 5: IF POS = -1
            PRINT VALUE IS NOT PRESENT IN THE ARRAY
        [END OF IF]
Step 6: EXIT

Example:

int[] arr = {10, 8, 2, 7, 3, 4, 9, 1, 6, 5};

int search(int num, int[] arr, int n) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == num) {
            return i; // found: return the index of the first occurrence
        }
    }
    return -1; // not found
}
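A quick usage sketch of the method above:

int[] arr = {10, 8, 2, 7, 3, 4, 9, 1, 6, 5};
int pos = search(7, arr, arr.length);
System.out.println(pos); // prints 3, the index of value 7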

C, C++ memory allocation/de-allocation functions (malloc, calloc, free, and realloc)

malloc() 

Allocates memory and returns a pointer to the first byte of allocated space.
The general syntax of malloc() is
ptr =(cast-type*)malloc(byte-size);

Example:
arr=(int*)malloc(10*sizeof(int));

calloc() 

Allocates space for an array of elements, initializes them to zero and returns a pointer to the memory.

The syntax of calloc() can be given as:
ptr=(cast-type*) calloc(n,elem-size);

free()

Frees previously allocated memory.

The general syntax of the free() function is,
free(ptr);

realloc() 

Alters the size of previously allocated memory.

The general syntax for realloc() can be given as,
ptr = realloc(ptr,newsize);

Inserting a Node at the End of a Linked List

Suppose we want to add a new node with the given data as the last node of the list.

Algorithm to insert a new node at the end of a linked list

Step 1: IF AVAIL = NULL
          Write OVERFLOW
          Go to Step 10
        [END OF IF]
Step 2: SET NEW_NODE = AVAIL
Step 3: SET AVAIL = AVAIL.NEXT
Step 4: SET NEW_NODE.DATA = VAL
Step 5: SET NEW_NODE.NEXT = NULL
Step 6: SET PTR = START
Step 7: Repeat Step 8 while PTR.NEXT != NULL
Step 8: SET PTR = PTR.NEXT
       [END OF LOOP]
Step 9: SET PTR.NEXT = NEW_NODE
Step 10: EXIT


We take a pointer variable PTR and initialize it with START. That is, PTR now points to the first node of the linked list. In the while loop, we traverse the linked list to reach the last node. Once we reach the last node, in Step 9, we change the NEXT pointer of the last node to store the address of the new node. Remember that the NEXT field of the new node contains NULL, which signifies the end of the linked list.

Java Example:

class Node {
    Node next;
    int data;
}

Node insert_end(Node start, int num) {
    Node new_node = new Node();
    new_node.data = num;
    new_node.next = null;
    if (start == null) {
        return new_node; // empty list: the new node becomes the first node
    }
    Node ptr = start;
    while (ptr.next != null) {
        ptr = ptr.next;
    }
    ptr.next = new_node; // attach the new node after the last node
    return start;
}
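A short usage sketch, assuming the Node class above:

Node start = null;
start = insert_end(start, 1);
start = insert_end(start, 2);
start = insert_end(start, 3);
// The list is now 1 -> 2 -> 3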

2020-07-24

Angular 10 : Tree View Example

The mat-tree provides a Material Design styled tree that can be used to display hierarchical data.

This tree builds on the foundation of the CDK tree and uses a similar interface for its data source input and template, except that its element and attribute selectors will be prefixed with mat- instead of cdk-.

There are two types of trees: flat trees and nested trees. The DOM structures are different for these two types of trees.

Flat tree

In a flat tree, the hierarchy is flattened; nodes are not rendered inside of each other, but instead are rendered as siblings in sequence. An instance of TreeFlattener is used to generate the flat list of items from hierarchical data. The "level" of each tree node is read through the getLevel method of the TreeControl; this level can be used to style the node such that it is indented to the appropriate level.

<mat-tree>
  <mat-tree-node> parent node </mat-tree-node>
  <mat-tree-node> -- child node1 </mat-tree-node>
  <mat-tree-node> -- child node2 </mat-tree-node>
</mat-tree>

Nested tree

In a nested tree, child nodes are placed inside their parent node in the DOM. The parent node has an outlet to keep all the child nodes.

<mat-tree>
   <mat-nested-tree-node>
     parent node
     <mat-nested-tree-node> -- child node1 </mat-nested-tree-node>
     <mat-nested-tree-node> -- child node2 </mat-nested-tree-node>
   </mat-nested-tree-node>
</mat-tree>

Adding a child node to the tree dynamically:



HTML:

<mat-tree [dataSource]="dataSource" [treeControl]="treeControl">
  <mat-tree-node *matTreeNodeDef="let node" matTreeNodeToggle matTreeNodePadding>
    <button mat-icon-button disabled></button>
    <mat-checkbox class="checklist-leaf-node"
                  [checked]="checklistSelection.isSelected(node)"
                  (change)="todoLeafItemSelectionToggle(node)">{{node.item}}</mat-checkbox>
  </mat-tree-node>

  <mat-tree-node *matTreeNodeDef="let node; when: hasNoContent" matTreeNodePadding>
    <button mat-icon-button disabled></button>
    <mat-form-field>
      <mat-label>New item...</mat-label>
      <input matInput #itemValue placeholder="Ex. Lettuce">
    </mat-form-field>
    <button mat-button (click)="saveNode(node, itemValue.value)">Save</button>
  </mat-tree-node>

  <mat-tree-node *matTreeNodeDef="let node; when: hasChild" matTreeNodePadding>
    <button mat-icon-button matTreeNodeToggle
            [attr.aria-label]="'toggle ' + node.filename">
      <mat-icon class="mat-icon-rtl-mirror">
        {{treeControl.isExpanded(node) ? 'expand_more' : 'chevron_right'}}
      </mat-icon>
    </button>
    <mat-checkbox [checked]="descendantsAllSelected(node)"
                  [indeterminate]="descendantsPartiallySelected(node)"
                  (change)="todoItemSelectionToggle(node)">{{node.item}}</mat-checkbox>
    <button mat-icon-button (click)="addNewItem(node)"><mat-icon>add</mat-icon></button>
  </mat-tree-node>
</mat-tree>

TS/Javascript:

import {SelectionModel} from '@angular/cdk/collections';
import {FlatTreeControl} from '@angular/cdk/tree';
import {Component, Injectable} from '@angular/core';
import {MatTreeFlatDataSource, MatTreeFlattener} from '@angular/material/tree';
import {BehaviorSubject} from 'rxjs';

/**
 * Node for to-do item
 */
export class TodoItemNode {
  children: TodoItemNode[];
  item: string;
}

/** Flat to-do item node with expandable and level information */
export class TodoItemFlatNode {
  item: string;
  level: number;
  expandable: boolean;
}

/**
 * The Json object for to-do list data.
 */
const TREE_DATA = {
  Groceries: {
    'Almond Meal flour': null,
    'Organic eggs': null,
    'Protein Powder': null,
    Fruits: {
      Apple: null,
      Berries: ['Blueberry', 'Raspberry'],
      Orange: null
    }
  },
  Reminders: [
    'Cook dinner',
    'Read the Material Design spec',
    'Upgrade Application to Angular'
  ]
};

/**
 * Checklist database, it can build a tree structured Json object.
 * Each node in Json object represents a to-do item or a category.
 * If a node is a category, it has children items and new items can be added under the category.
 */
@Injectable()
export class ChecklistDatabase {
  dataChange = new BehaviorSubject<TodoItemNode[]>([]);

  get data(): TodoItemNode[] { return this.dataChange.value; }

  constructor() {
    this.initialize();
  }

  initialize() {
    // Build the tree nodes from Json object. The result is a list of `TodoItemNode` with nested
    //     file node as children.
    const data = this.buildFileTree(TREE_DATA, 0);

    // Notify the change.
    this.dataChange.next(data);
  }

  /**
   * Build the file structure tree. The `value` is the Json object, or a sub-tree of a Json object.
   * The return value is the list of `TodoItemNode`.
   */
  buildFileTree(obj: {[key: string]: any}, level: number): TodoItemNode[] {
    return Object.keys(obj).reduce<TodoItemNode[]>((accumulator, key) => {
      const value = obj[key];
      const node = new TodoItemNode();
      node.item = key;

      if (value != null) {
        if (typeof value === 'object') {
          node.children = this.buildFileTree(value, level + 1);
        } else {
          node.item = value;
        }
      }

      return accumulator.concat(node);
    }, []);
  }

  /** Add an item to to-do list */
  insertItem(parent: TodoItemNode, name: string) {
    if (parent.children) {
      parent.children.push({item: name} as TodoItemNode);
      this.dataChange.next(this.data);
    }
  }

  updateItem(node: TodoItemNode, name: string) {
    node.item = name;
    this.dataChange.next(this.data);
  }
}

/**
 * @title Tree with checkboxes
 */
@Component({
  selector: 'tree-checklist-example',
  templateUrl: 'tree-checklist-example.html',
  styleUrls: ['tree-checklist-example.css'],
  providers: [ChecklistDatabase]
})
export class TreeChecklistExample {
  /** Map from flat node to nested node. This helps us find the nested node to be modified */
  flatNodeMap = new Map<TodoItemFlatNode, TodoItemNode>();

  /** Map from nested node to flattened node. This helps us to keep the same object for selection */
  nestedNodeMap = new Map<TodoItemNode, TodoItemFlatNode>();

  /** A selected parent node to be inserted */
  selectedParent: TodoItemFlatNode | null = null;

  /** The new item's name */
  newItemName = '';

  treeControl: FlatTreeControl<TodoItemFlatNode>;

  treeFlattener: MatTreeFlattener<TodoItemNode, TodoItemFlatNode>;

  dataSource: MatTreeFlatDataSource<TodoItemNode, TodoItemFlatNode>;

  /** The selection for checklist */
  checklistSelection = new SelectionModel<TodoItemFlatNode>(true /* multiple */);

  constructor(private _database: ChecklistDatabase) {
    this.treeFlattener = new MatTreeFlattener(this.transformer, this.getLevel,
      this.isExpandable, this.getChildren);
    this.treeControl = new FlatTreeControl<TodoItemFlatNode>(this.getLevel, this.isExpandable);
    this.dataSource = new MatTreeFlatDataSource(this.treeControl, this.treeFlattener);

    _database.dataChange.subscribe(data => {
      this.dataSource.data = data;
    });
  }

  getLevel = (node: TodoItemFlatNode) => node.level;

  isExpandable = (node: TodoItemFlatNode) => node.expandable;

  getChildren = (node: TodoItemNode): TodoItemNode[] => node.children;

  hasChild = (_: number, _nodeData: TodoItemFlatNode) => _nodeData.expandable;

  hasNoContent = (_: number, _nodeData: TodoItemFlatNode) => _nodeData.item === '';

  /**
   * Transformer to convert nested node to flat node. Record the nodes in maps for later use.
   */
  transformer = (node: TodoItemNode, level: number) => {
    const existingNode = this.nestedNodeMap.get(node);
    const flatNode = existingNode && existingNode.item === node.item
        ? existingNode
        : new TodoItemFlatNode();
    flatNode.item = node.item;
    flatNode.level = level;
    flatNode.expandable = !!node.children;
    this.flatNodeMap.set(flatNode, node);
    this.nestedNodeMap.set(node, flatNode);
    return flatNode;
  }

  /** Whether all the descendants of the node are selected. */
  descendantsAllSelected(node: TodoItemFlatNode): boolean {
    const descendants = this.treeControl.getDescendants(node);
    const descAllSelected = descendants.every(child =>
      this.checklistSelection.isSelected(child)
    );
    return descAllSelected;
  }

  /** Whether part of the descendants are selected */
  descendantsPartiallySelected(node: TodoItemFlatNode): boolean {
    const descendants = this.treeControl.getDescendants(node);
    const result = descendants.some(child => this.checklistSelection.isSelected(child));
    return result && !this.descendantsAllSelected(node);
  }

  /** Toggle the to-do item selection. Select/deselect all the descendants node */
  todoItemSelectionToggle(node: TodoItemFlatNode): void {
    this.checklistSelection.toggle(node);
    const descendants = this.treeControl.getDescendants(node);
    this.checklistSelection.isSelected(node)
      ? this.checklistSelection.select(...descendants)
      : this.checklistSelection.deselect(...descendants);

    // Force update for the parent
    descendants.every(child =>
      this.checklistSelection.isSelected(child)
    );
    this.checkAllParentsSelection(node);
  }

  /** Toggle a leaf to-do item selection. Check all the parents to see if they changed */
  todoLeafItemSelectionToggle(node: TodoItemFlatNode): void {
    this.checklistSelection.toggle(node);
    this.checkAllParentsSelection(node);
  }

  /* Checks all the parents when a leaf node is selected/unselected */
  checkAllParentsSelection(node: TodoItemFlatNode): void {
    let parent: TodoItemFlatNode | null = this.getParentNode(node);
    while (parent) {
      this.checkRootNodeSelection(parent);
      parent = this.getParentNode(parent);
    }
  }

  /** Check root node checked state and change it accordingly */
  checkRootNodeSelection(node: TodoItemFlatNode): void {
    const nodeSelected = this.checklistSelection.isSelected(node);
    const descendants = this.treeControl.getDescendants(node);
    const descAllSelected = descendants.every(child =>
      this.checklistSelection.isSelected(child)
    );
    if (nodeSelected && !descAllSelected) {
      this.checklistSelection.deselect(node);
    } else if (!nodeSelected && descAllSelected) {
      this.checklistSelection.select(node);
    }
  }

  /* Get the parent node of a node */
  getParentNode(node: TodoItemFlatNode): TodoItemFlatNode | null {
    const currentLevel = this.getLevel(node);

    if (currentLevel < 1) {
      return null;
    }

    const startIndex = this.treeControl.dataNodes.indexOf(node) - 1;

    for (let i = startIndex; i >= 0; i--) {
      const currentNode = this.treeControl.dataNodes[i];

      if (this.getLevel(currentNode) < currentLevel) {
        return currentNode;
      }
    }
    return null;
  }

  /** Select the category so we can insert the new item. */
  addNewItem(node: TodoItemFlatNode) {
    const parentNode = this.flatNodeMap.get(node);
    this._database.insertItem(parentNode!, '');
    this.treeControl.expand(node);
  }

  /** Save the node to database */
  saveNode(node: TodoItemFlatNode, itemValue: string) {
    const nestedNode = this.flatNodeMap.get(node);
    this._database.updateItem(nestedNode!, itemValue);
  }
}


Java - JEP 347: Enable C++14 Language Features

This feature was added in JDK 16. It includes being able to build with recent versions of various compilers that support C++11/14 language features.

Summary:
Allow the use of C++14 language features in JDK C++ source code, and give specific guidance about which of those features may be used in HotSpot code.

The purpose of this JEP is to formally allow C++ source code changes within the JDK to take advantage of C++14 language features.

Lists of new features for C++11 and C++14, along with links to their descriptions, can be found in the online documentation for some of the compilers and libraries:


  • C++ Standards Support in GCC
  • C++ Support in Clang
  • Visual C++ Language Conformance
  • libstdc++ Status
  • libc++ Status





2020-07-22

Spring @Bean Annotation Example

@Bean is a method-level annotation and a direct analog of the XML <bean/> element. The annotation supports some of the attributes offered by <bean/>, such as:

  • init-method
  • destroy-method
  • autowiring
  • name

You can use the @Bean annotation in a @Configuration-annotated or a @Component-annotated class.

Declaring a Bean:
To declare a bean, you can annotate a method with the @Bean annotation. You use this method to register a bean definition within an ApplicationContext of the type specified as the method’s return value. By default, the bean name is the same as the method name. The following example shows a @Bean method declaration:


@Configuration
public class AppConfig {

    @Bean
    public TransferServiceImpl transferService() {
        return new TransferServiceImpl();
    }
}
The preceding configuration is exactly equivalent to the following Spring XML:

<beans>
    <bean id="transferService" class="com.acme.TransferServiceImpl"/>
</beans>
Both declarations make a bean named transferService available in the ApplicationContext, bound to an object instance of type TransferServiceImpl, as the following text image shows:

transferService -> com.acme.TransferServiceImpl
You can also declare your @Bean method with an interface (or base class) return type, as the following example shows:


@Configuration
public class AppConfig {

    @Bean
    public TransferService transferService() {
        return new TransferServiceImpl();
    }
}
However, this limits the visibility for advance type prediction to the specified interface type (TransferService). The container then knows the full type (TransferServiceImpl) only once the affected singleton bean has been instantiated. Non-lazy singleton beans get instantiated according to their declaration order, so you may see different type matching results depending on when another component tries to match by a non-declared type (such as @Autowired TransferServiceImpl, which resolves only once the transferService bean has been instantiated).

If you consistently refer to your types by a declared service interface, your @Bean return types may safely join that design decision. However, for components that implement several interfaces or for components potentially referred to by their implementation type, it is safer to declare the most specific return type possible (at least as specific as required by the injection points that refer to your bean).
Bean Dependencies:
A @Bean-annotated method can have an arbitrary number of parameters that describe the dependencies required to build that bean. For instance, if our TransferService requires an AccountRepository, we can materialize that dependency with a method parameter, as the following example shows:


@Configuration
public class AppConfig {

    @Bean
    public TransferService transferService(AccountRepository accountRepository) {
        return new TransferServiceImpl(accountRepository);
    }
}
The resolution mechanism is pretty much identical to constructor-based dependency injection. See the relevant section for more details.

Receiving Lifecycle Callbacks:
Any classes defined with the @Bean annotation support the regular lifecycle callbacks and can use the @PostConstruct and @PreDestroy annotations from JSR-250. See JSR-250 annotations for further details.
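For instance, a minimal sketch using the JSR-250 annotations, with a hypothetical CachingService (the annotations live in javax.annotation on the Spring versions discussed here):

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

public class CachingService {

    @PostConstruct
    public void populateCache() {
        // runs after dependency injection is complete
    }

    @PreDestroy
    public void clearCache() {
        // runs before the bean is removed from the container
    }
}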

The regular Spring lifecycle callbacks are fully supported as well. If a bean implements InitializingBean, DisposableBean, or Lifecycle, their respective methods are called by the container.

The standard set of *Aware interfaces (such as BeanFactoryAware, BeanNameAware, MessageSourceAware, ApplicationContextAware, and so on) are also fully supported.

The @Bean annotation supports specifying arbitrary initialization and destruction callback methods, much like Spring XML’s init-method and destroy-method attributes on the bean element, as the following example shows:


public class BeanOne {

    public void init() {
        // initialization logic
    }
}

public class BeanTwo {

    public void cleanup() {
        // destruction logic
    }
}

@Configuration
public class AppConfig {

    @Bean(initMethod = "init")
    public BeanOne beanOne() {
        return new BeanOne();
    }

    @Bean(destroyMethod = "cleanup")
    public BeanTwo beanTwo() {
        return new BeanTwo();
    }
}
By default, beans defined with Java configuration that have a public close or shutdown method are automatically enlisted with a destruction callback. If you have a public close or shutdown method and you do not wish for it to be called when the container shuts down, you can add @Bean(destroyMethod="") to your bean definition to disable the default (inferred) mode.

You may want to do that by default for a resource that you acquire with JNDI, as its lifecycle is managed outside the application. In particular, make sure to always do it for a DataSource, as it is known to be problematic on Java EE application servers.

The following example shows how to prevent an automatic destruction callback for a DataSource:


@Bean(destroyMethod="")
public DataSource dataSource() throws NamingException {
    return (DataSource) jndiTemplate.lookup("MyDS");
}
Also, with @Bean methods, you typically use programmatic JNDI lookups, either by using Spring’s JndiTemplate or JndiLocatorDelegate helpers or straight JNDI InitialContext usage but not the JndiObjectFactoryBean variant (which would force you to declare the return type as the FactoryBean type instead of the actual target type, making it harder to use for cross-reference calls in other @Bean methods that intend to refer to the provided resource here).

In the case of BeanOne from the example above, it would be equally valid to call the init() method directly during construction, as the following example shows:


@Configuration
public class AppConfig {

    @Bean
    public BeanOne beanOne() {
        BeanOne beanOne = new BeanOne();
        beanOne.init();
        return beanOne;
    }

    // ...
}
When you work directly in Java, you can do anything you like with your objects and do not always need to rely on the container lifecycle.
Specifying Bean Scope
Spring includes the @Scope annotation so that you can specify the scope of a bean.

Using the @Scope Annotation
You can specify that your beans defined with the @Bean annotation should have a specific scope. You can use any of the standard scopes specified in the Bean Scopes section.

The default scope is singleton, but you can override this with the @Scope annotation, as the following example shows:


@Configuration
public class MyConfiguration {

    @Bean
    @Scope("prototype")
    public Encryptor encryptor() {
        // ...
    }
}
@Scope and scoped-proxy:
Spring offers a convenient way of working with scoped dependencies through scoped proxies. The easiest way to create such a proxy when using the XML configuration is the <aop:scoped-proxy/> element. Configuring your beans in Java with a @Scope annotation offers equivalent support with the proxyMode attribute. The default is no proxy (ScopedProxyMode.NO), but you can specify ScopedProxyMode.TARGET_CLASS or ScopedProxyMode.INTERFACES.

If you port the scoped proxy example from the XML reference documentation (see scoped proxies) to our @Bean using Java, it resembles the following:


// an HTTP Session-scoped bean exposed as a proxy
@Bean
@SessionScope
public UserPreferences userPreferences() {
    return new UserPreferences();
}

@Bean
public UserService userService() {
    UserService service = new SimpleUserService();
    // a reference to the proxied userPreferences bean
    service.setUserPreferences(userPreferences());
    return service;
}
Customizing Bean Naming
By default, configuration classes use a @Bean method’s name as the name of the resulting bean. This functionality can be overridden, however, with the name attribute, as the following example shows:


@Configuration
public class AppConfig {

    @Bean(name = "myThing")
    public Thing thing() {
        return new Thing();
    }
}
Bean Aliasing
As discussed in Naming Beans, it is sometimes desirable to give a single bean multiple names, otherwise known as bean aliasing. The name attribute of the @Bean annotation accepts a String array for this purpose. The following example shows how to set a number of aliases for a bean:


@Configuration
public class AppConfig {

    @Bean({"dataSource", "subsystemA-dataSource", "subsystemB-dataSource"})
    public DataSource dataSource() {
        // instantiate, configure and return DataSource bean...
    }
}
Bean Description:
Sometimes, it is helpful to provide a more detailed textual description of a bean. This can be particularly useful when beans are exposed (perhaps through JMX) for monitoring purposes.

To add a description to a @Bean, you can use the @Description annotation, as the following example shows:


@Configuration
public class AppConfig {

    @Bean
    @Description("Provides a basic example of a bean")
    public Thing thing() {
        return new Thing();
    }
}

Spring @CrossOrigin Example

Cross-origin resource sharing (CORS) is a W3C specification, implemented by most browsers, that lets you specify in a flexible way what kind of cross-domain requests are authorized, instead of using less secure and less powerful hacks like IFRAME or JSONP.

As of Spring Framework 4.2, CORS is supported out of the box. CORS requests (including preflight ones with an OPTIONS method) are automatically dispatched to the various registered HandlerMappings. They handle CORS preflight requests and intercept CORS simple and actual requests thanks to a CorsProcessor implementation (DefaultCorsProcessor by default) in order to add the relevant CORS response headers (like Access-Control-Allow-Origin) based on the CORS configuration you have provided.

[Note]
Be aware that cookies are not allowed by default, to avoid increasing the attack surface of the web application (for example, by exposing sensitive user-specific information like CSRF tokens). Set the allowCredentials property to true in order to allow them.

[Note]
Since CORS requests are automatically dispatched, you do not need to change the DispatcherServlet dispatchOptionsRequest init parameter value; using its default value (false) is the recommended approach.

Controller method CORS configuration

You can add an @CrossOrigin annotation to your @RequestMapping annotated handler method in order to enable CORS on it. By default @CrossOrigin allows all origins and the HTTP methods specified in the @RequestMapping annotation:

@RestController
@RequestMapping("/account")
public class AccountController {

    @CrossOrigin
    @RequestMapping("/{id}")
    public Account retrieve(@PathVariable Long id) {
        // ...
    }

    @RequestMapping(method = RequestMethod.DELETE, path = "/{id}")
    public void remove(@PathVariable Long id) {
        // ...
    }
}
It is also possible to enable CORS for the whole controller:

@CrossOrigin(origins = "https://domain2.com", maxAge = 3600)
@RestController
@RequestMapping("/account")
public class AccountController {

    @RequestMapping("/{id}")
    public Account retrieve(@PathVariable Long id) {
        // ...
    }

    @RequestMapping(method = RequestMethod.DELETE, path = "/{id}")
    public void remove(@PathVariable Long id) {
        // ...
    }
}
In the above example CORS support is enabled for both the retrieve() and the remove() handler methods, and you can also see how you can customize the CORS configuration using @CrossOrigin attributes.

You can even use both controller-level and method-level CORS configurations; Spring will then combine attributes from both annotations to create merged CORS configuration.

@CrossOrigin(maxAge = 3600)
@RestController
@RequestMapping("/account")
public class AccountController {

    @CrossOrigin("https://domain2.com")
    @RequestMapping("/{id}")
    public Account retrieve(@PathVariable Long id) {
        // ...
    }

    @RequestMapping(method = RequestMethod.DELETE, path = "/{id}")
    public void remove(@PathVariable Long id) {
        // ...
    }
}

Global CORS configuration

In addition to fine-grained, annotation-based configuration, you’ll probably want to define some global CORS configuration as well. This is similar to using filters, but it can be declared within Spring MVC and combined with fine-grained @CrossOrigin configuration. By default, all origins and the GET, HEAD, and POST methods are allowed.

 JavaConfig

Enabling CORS for the whole application is as simple as:

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/**");
    }
}
You can easily change any properties, as well as only apply this CORS configuration to a specific path pattern:

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/api/**")
            .allowedOrigins("https://domain2.com")
            .allowedMethods("PUT", "DELETE")
            .allowedHeaders("header1", "header2", "header3")
            .exposedHeaders("header1", "header2")
            .allowCredentials(true)
            .maxAge(3600);
    }
}

 XML namespace

The following minimal XML configuration enables CORS for the /** path pattern with the same default properties as with the aforementioned JavaConfig examples:

<mvc:cors>
    <mvc:mapping path="/**" />
</mvc:cors>
It is also possible to declare several CORS mappings with customized properties:

<mvc:cors>

    <mvc:mapping path="/api/**"
        allowed-origins="https://domain1.com, https://domain2.com"
        allowed-methods="GET, PUT"
        allowed-headers="header1, header2, header3"
        exposed-headers="header1, header2"
        max-age="123" />

    <mvc:mapping path="/resources/**"
        allowed-origins="https://domain1.com" />

</mvc:cors>

Advanced Customization

CorsConfiguration allows you to specify how the CORS requests should be processed: allowed origins, headers, methods, etc. It can be provided in various ways:

AbstractHandlerMapping#setCorsConfiguration() allows you to specify a Map with several CorsConfiguration instances mapped to path patterns like /api/**.
Subclasses can provide their own CorsConfiguration by overriding the AbstractHandlerMapping#getCorsConfiguration(Object, HttpServletRequest) method.
Handlers can implement the CorsConfigurationSource interface (like ResourceHttpRequestHandler now does) in order to provide a CorsConfiguration instance for each request.

Filter based CORS support

In order to support CORS with filter-based security frameworks like Spring Security, or with other libraries that do not natively support CORS, Spring Framework also provides a CorsFilter. Instead of using @CrossOrigin or WebMvcConfigurer#addCorsMappings(CorsRegistry), you need to register a custom filter defined like the one below:

import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;
import org.springframework.web.filter.CorsFilter;

public class MyCorsFilter extends CorsFilter {

    public MyCorsFilter() {
        super(configurationSource());
    }

    private static UrlBasedCorsConfigurationSource configurationSource() {
        CorsConfiguration config = new CorsConfiguration();
        config.setAllowCredentials(true);
        config.addAllowedOrigin("https://domain1.com");
        config.addAllowedHeader("*");
        config.addAllowedMethod("*");
        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/**", config);
        return source;
    }
}

2020-07-20

Introducing Java Functions for Spring Cloud Stream Applications - Part 1

Last week Spring posted Introducing Java Functions for Spring Cloud Stream Applications - Part 0
to announce the release of Spring Cloud Stream applications 2020.0.0-M2.
Here, we explore function composition, one of the more powerful features enabled by the function-oriented architecture presented in Part 0. If you haven’t had a chance to read Part 0, now would be a great time!

Function Composition

Function composition has a solid theoretical foundation in mathematics and computer science.
In practical terms, it is a way to join a sequence of functions to create a more complex function.

Let’s look at a simple example using Java functions. We have two functions, reverse and upper.
Each accepts a String as input and produces a String as output. We can compose them using the built-in andThen method. The composite function is itself a Function<String, String>.
If you run this, it will print ESREVER.

Function<String, String> reverse = s -> new StringBuilder(s).reverse().toString();
Function<String, String> upper = String::toUpperCase;
Function<String, String> reverseUpper = reverse.andThen(upper);
System.out.println(reverseUpper.apply("reverse"));

Tip
In addition to andThen, java.util.function.Function includes compose, which first applies the argument (b) and then applies a to the result.
Thus, a.compose(b).apply(s) is equivalent to a.apply(b.apply(s)).
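For instance, with the reverse and upper functions from the example above:

Function<String, String> reverseAfterUpper = reverse.compose(upper);
// equivalent to upper.andThen(reverse); this also prints ESREVER
System.out.println(reverseAfterUpper.apply("reverse"));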
Function Composition in Spring Cloud Function
Spring Cloud Function includes some great features to take composing functions to another level.

Declarative Composition

If we define our functions from the above example as Spring beans,

@Bean
Function<String, String> reverse() {
    return s -> new StringBuilder(s).reverse().toString();
}

@Bean
Function<String, String> upper() {
    return String::toUpperCase;
}

we can compose these functions by using the spring.cloud.function.definition property: spring.cloud.function.definition=upper|reverse

Here | is a composition operator which results in an auto-configured bean implementing the composite function, along with related resources to let you seamlessly invoke the composite function.

Composition With Supplier and Consumer
Spring Cloud Function extends native Java Function composition to support composition with Supplier and Consumer.

This follows from concepts that are implicitly true:

A Function composed with a Consumer is a Consumer

A Supplier composed with a Function is a Supplier

A Supplier composed with a Consumer is a valid processing model (with no inputs or outputs, this form of composition does not map to a functional interface, but is analogous to Runnable)
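In plain Java terms, these rules can be sketched by hand, since java.util.function provides no built-in composition for Supplier or Consumer:

import java.time.LocalTime;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

public class CompositionSketch {

    public static void main(String[] args) {
        Supplier<String> time = () -> LocalTime.now().toString();
        Function<String, String> upper = String::toUpperCase;
        Consumer<String> log = System.out::println;

        // A Supplier composed with a Function is a Supplier:
        Supplier<String> upperTime = () -> upper.apply(time.get());

        // A Function composed with a Consumer is a Consumer:
        Consumer<String> logUpper = s -> log.accept(upper.apply(s));

        // A Supplier composed with a Consumer has no inputs or outputs, analogous to Runnable:
        Runnable pipeline = () -> log.accept(time.get());
        pipeline.run();
    }
}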

As we shall see, Spring Cloud Stream Applications employ these concepts to great effect.

Type Conversion
When using function composition, we have to consider compatible argument types.
Using native Java composition, we can compose a Function<Integer, String> with a Function<String, Integer> into a Function<Integer, Integer>:

Function<Integer, String> intToStr = String::valueOf;
Function<String, Integer> doubleit = i -> Integer.parseInt(i) * 2;
Function<Integer, Integer> composite = intToStr.andThen(doubleit);
composite.apply(10);
When running a Spring application, Spring Cloud Function uses Spring’s standard type conversion support to coerce function arguments as needed.
Given the following Function bean definitions, the function definition intToStr|doubleit works as expected, converting the String to an Integer.

@Bean
Function<Integer, Integer> doubleit() {
    return i -> i * 2;
}

@Bean
Function<Integer, String> intToStr() {
    return String::valueOf;
}

In addition to converting primitives, Spring functions can convert between Message and POJO, JSON String and POJO, and more.
For example, the following functions can be composed in either order:

@Bean
Function<Integer, Integer> doubleit() {
    return i -> i * 2;
}

@Bean
Function<Integer, Message<String>> convertIntMessage() {
    return i -> MessageBuilder.withPayload(String.valueOf(i)).build();
}

Function Composition in Spring Cloud Stream
Spring Cloud Stream 3.x builds on Spring Cloud Function to fully support a functional programming model. The fundamental premise of Spring Cloud Stream is that it enables a function to execute in a distributed environment. The binder binds the input(s) and output(s) of a function packaged in a Spring Boot application, to configured message broker destinations so that the output produced by one function is consumed as the input of another remotely running function. We can think of a data streaming pipeline as just a distributed composition of functional components.

To illustrate this, a typical Spring Cloud Stream pipeline like

source | processor1 | processor2 | processor3 | sink
is logically equivalent to

supplier | function1 | function2 | function3 | sink
This idea leads to some interesting architectural choices since we can use function composition to combine some or all of these components into a single application.

For example, we can implement the sequence of three processors as a single application; let’s call it composed-processor, packaging function1, function2, and function3, composed by spring.cloud.function.definition=function1|function2|function3. Now the pipeline can be deployed as:

source | composed-processor | sink
Even simpler, we can create a composed-source to do all the processing within the source:

composed-source | sink
As always, there is no right answer here. There are always trade-offs to consider:

Function composition results in less deployments. This reduces cost, latency, operational complexity, and so on.

Individual deployments are loosely coupled and can scale independently.

The message broker provides guaranteed delivery. When a simple stateless application goes down and is restarted, it can continue where it left off, processing the pending results of the previous processing step.

A single application that performs complex processing is harder to reason about and keeps intermediate processing results in memory, or possibly in an interim data store. When a stateful application fails, it can lead to inconsistent state, making recovery harder.

If these trade-offs look familiar, it’s because they are pretty much the same as any microservice vs monolith debate. In the end, do what works best for you.

Function Composition with Prepackaged Source Applications
In some cases, function composition is a no-brainer. From the start, we have provided prepackaged processors to perform simple transformations or filtering using SpEL. The legacy architecture required a separate processor when using the prepackaged sources or sinks. A common complaint from users was “why do I need to deploy a separate application just to evaluate a SpEL expression?” To address this, we initially introduced a form of support for function composition in an earlier release. To use this feature with the prepackaged applications, users had to fork them and modify the code or the build dependencies to provide the functions.

The current release provides function composition out of the box for all of the prepackaged sources. Specifically, a source can now be composed with prepackaged functions to perform any of the following locally:

execute SpEL transformations

enrich message headers

filter events

produce task launch requests

As an example, we can compose the time source with a header enricher and filter with configuration properties and run it as a standalone Spring boot application:

java -jar target/time-source-rabbit-3.0.0-SNAPSHOT.jar
--spring.cloud.stream.bindings.output.destination=even
--spring.cloud.function.definition=timeSupplier|headerEnricherFunction|filterFunction
--header.enricher.headers=seconds=T(java.lang.Integer).valueOf(payload.substring(payload.length() - 2))
--filter.function.expression=headers[seconds]%2==0
This will publish the time, such as 07/16/20 16:43:48, every other second (whenever the number of seconds is even) to the configured destination even.

Here we are using a prepackaged time source for RabbitMQ, binding the output to a topic exchange named even. The binder will create the exchange if it does not exist. The function definition extends the supplier to extract the seconds, convert the value to an integer, and store it in the seconds message header, then filters on the value of that header. Only even values pass the filter.

Task Launch Requests
In 2018, we introduced a reference architecture for running file ingest with Spring Cloud Data Flow and Spring Batch. To do this, we forked the sftp source as sftp-dataflow, specifically to implement a prepackaged source that produces task launch requests. The task launch request is a simple value object, rendered as JSON, and consumed by the tasklauncher-sink. The sink acts as a client to Data Flow to launch a batch application per the request. We initially chose sftp since it is the most commonly used protocol for file processing. However, we realized that the same pattern can be applied to any source. We can now do this with function composition. Along with the standard sftp source, we can trigger a task launch from ftp, file, s3, and so on. Even the time source can be used to launch a task at regular intervals.

This somewhat contrived example produces task launch requests:

java -jar target/time-source-rabbit-3.0.0-SNAPSHOT.jar
--spring.cloud.stream.bindings.output.destination=time-test
--spring.cloud.function.definition=timeSupplier|spelFunction|headerEnricherFunction|taskLaunchRequestFunction
--spel.function.expression=payload.length()
--header.enricher.headers=task-id=payload*2
--task.launch.request.task-name-expression="'task-'+headers['task-id']"
The payload, as JSON, is {"args":[],"deploymentProps":{},"name":"task-34"}.

Function Composition with User-Written Code
In reality, when users develop a Spring Cloud Stream pipeline, they are likely to select a source and sink from our prepackaged Spring Cloud Stream Applications. Processors are typically user-written code, implementing specific business logic. If you are writing a processor, or want to extend a source or sink, any of the functions are available to you. Since we publish the functions as separate artifacts, you can simply include them in your dependencies. You can either use declarative composition, as shown above, or you can inject them into your code and invoke them programmatically. Of course, you can easily integrate your own functions as well.
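For example, a user-written processor can inject one of the prepackaged function beans and wrap it with additional logic. Below is a minimal sketch of programmatic invocation, assuming the spel-function artifact is on the classpath and exposes a spelFunction bean of type Function<Message<?>, Message<?>> (treat the bean name and exact signature as assumptions to verify against the published artifact):

import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

@Configuration
public class MyProcessorConfiguration {

  // Wrap the injected, prepackaged function with user-written logic.
  @Bean
  public Function<Message<?>, Message<?>> myProcessor(
      Function<Message<?>, Message<?>> spelFunction) {
    return message -> {
      Message<?> transformed = spelFunction.apply(message);
      // Additional business logic after the SpEL transformation goes here.
      return MessageBuilder.fromMessage(transformed)
          .setHeader("processed", true)
          .build();
    };
  }
}

Bound with spring.cloud.function.definition=myProcessor, this behaves like the declarative composition shown earlier but gives you full programmatic control over how the functions are combined.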

2020-07-17

Using OAuth 2.0 to Access Google APIs | How to Generate an OAuth Token?

All requests to the Google APIs must be authorized by an authenticated user. To access any Google product, you need an OAuth token.

Authorizing requests with OAuth 2.0:

Set up your application:

Go to the link below and set up your application. Once the application is created, get your OAuth client ID:

https://console.developers.google.com/apis/dashboard

Get an access code:

To receive the code, you need a server listening on the redirect URI. Install Tomcat and run a simple dynamic web application with just a homepage.
Let's assume the application is running at http://localhost/.

Construct the URL below and open it in Chrome:

https://accounts.google.com/o/oauth2/auth?scope=https://www.googleapis.com/auth/blogger&response_type=code&access_type=offline&redirect_uri=http://localhost&client_id=YourClientID

Log in with a valid Google account. Once you have logged in successfully, the browser will redirect to localhost with a code parameter:

http://localhost/?code=4/2AGswerwyro2KlmBXgXV8A73877482hsjdfhsj7AAtTERUCut7VD_Ty2gVlIlGMyD5xTxBFcYudifu7847&scope=https://www.googleapis.com/auth/blogger

Copy the code, then open Postman or any other REST client.

Make the following request to Google's token endpoint. Note that redirect_uri must match the value used in the authorization URL (http://localhost here):

POST /token HTTP/1.1
Host: oauth2.googleapis.com
Content-Type: application/x-www-form-urlencoded

code=4/P7q7W91a-oMsCeLvIaQm6bTrgtp7&
client_id=your_client_id&
client_secret=your_client_secret&
redirect_uri=http%3A%2F%2Flocalhost&
grant_type=authorization_code

The response will look like this:
{
  "access_token": "1/fFAGRNJru1FTz70BzhT3Zg",
  "expires_in": 3920,
  "token_type": "Bearer",
  "refresh_token": "1//xEoDL4iW3cxlI7yDbSRFYNG01kVKM2C-259HOF2aQbI"
}
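If you prefer to perform this exchange from Java instead of Postman, here is a minimal RestTemplate sketch; the placeholder values (your_code_from_the_redirect, your_client_id, your_client_secret) are yours to fill in:

RestTemplate restTemplate = new RestTemplate();

HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_FORM_URLENCODED);

// Form parameters for the authorization-code grant; replace the
// placeholder values with your own code, client ID, and client secret.
MultiValueMap<String, String> form = new LinkedMultiValueMap<>();
form.add("code", "your_code_from_the_redirect");
form.add("client_id", "your_client_id");
form.add("client_secret", "your_client_secret");
form.add("redirect_uri", "http://localhost");
form.add("grant_type", "authorization_code");

// POST to Google's token endpoint; the JSON response contains
// access_token, expires_in, token_type, and refresh_token.
String tokenResponse = restTemplate.postForObject(
    "https://oauth2.googleapis.com/token",
    new HttpEntity<>(form, headers),
    String.class);
System.out.println(tokenResponse);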

The access_token is your authorization token. Use it for Google API calls.

GET https://www.googleapis.com/blogger/v3/users/self/blogs
Authorization: /* OAuth 2.0 token here */

Example:

GET https://www.googleapis.com/blogger/v3/users/self/blogs
Authorization: Bearer 1/fFAGRNJru1FTz70BzhT3Zg

Java API call: sending a GET request with authorization headers


HttpHeaders headers = new HttpHeaders();
headers.set("Authorization", "Bearer " + accessToken);

// Use exchange(...) for a GET with custom headers; no request body is needed.
HttpEntity<String> entity = new HttpEntity<>(headers);
String result = restTemplate
    .exchange(url, HttpMethod.GET, entity, String.class)
    .getBody();

You can use the same approach for all Google products, such as Blogger, Drive, YouTube, Gmail, and Maps.

2020-07-15

Spring Boot and Vaadin : Filtering rows in Vaadin Grid

Adding a text field for filtering:
Start by adding a text field above the grid. Remember, MainView is a VerticalLayout, so you need to add the text field before the grid.
MainView.java
public class MainView extends VerticalLayout {
    private ContactService contactService;
    private Grid<Contact> grid = new Grid<>(Contact.class);
    private TextField filterText = new TextField(); ①

    public MainView(ContactService contactService) {
        this.contactService = contactService;
        addClassName("list-view");
        setSizeFull();
        configureFilter(); ②
        configureGrid();
        add(filterText, grid); ③
        updateList();
    }

    private void configureFilter() {
        filterText.setPlaceholder("Filter by name..."); ④
        filterText.setClearButtonVisible(true); ⑤
        filterText.setValueChangeMode(ValueChangeMode.LAZY); ⑥
        filterText.addValueChangeListener(e -> updateList()); ⑦
    }

    // Grid configuration omitted
}
① Creates a field for the TextField.
② Calls the configureFilter() method to configure what the filter should do.
③ Updates the add() method call to add both filterText and grid.
④ Sets placeholder text so users know what to type in the field.
⑤ Sets the clear button to visible so users can easily clear the filter.
⑥ Sets the value change mode to LAZY so the text field will notify you of changes automatically after a short pause in typing.
⑦ Calls the updateList method whenever the value changes. We’ll update the logic to filter the content shortly.
Implementing filtering in the back end
We could implement the filtering in two ways:
1. Keep a copy of the contacts list in the view and filter it using Java streams.
2. Defer the filtering to the back end (database).
It’s a best practice to avoid keeping references to lists of objects in Vaadin views, as this can lead to excessive memory usage.
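For contrast, a rough sketch of what option 1 would look like inside MainView follows; note that it keeps a full copy of the list in memory, which is exactly what we want to avoid:

// Sketch of option 1: keep a copy of the contacts in the view and filter
// it with Java streams on every change (not the approach we take below).
private List<Contact> contacts; // filled once via contactService.findAll()

private void updateList() {
  String filter = filterText.getValue().toLowerCase();
  grid.setItems(contacts.stream()
      .filter(contact -> contact.getFirstName().toLowerCase().contains(filter)
          || contact.getLastName().toLowerCase().contains(filter))
      .collect(Collectors.toList()));
}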
We’ll add filtering support to the back end:
1. Amend ContactService as follows:
ContactService.java
public class ContactService {
    private static final Logger LOGGER = Logger.getLogger(ContactService.class.getName());

    private ContactRepository contactRepository;
    private CompanyRepository companyRepository;

    public ContactService(ContactRepository contactRepository,
                          CompanyRepository companyRepository) {
        this.contactRepository = contactRepository;
        this.companyRepository = companyRepository;
    }

    public List<Contact> findAll() {
        return contactRepository.findAll();
    }

    public List<Contact> findAll(String stringFilter) { ①
        if (stringFilter == null || stringFilter.isEmpty()) { ②
            return contactRepository.findAll();
        } else {
            return contactRepository.search(stringFilter); ③
        }
    }

    // remaining methods omitted
}
① Adds a new findAll method that takes a filter text as a parameter.
② If the filter text is null or empty, return all contacts.
③ Otherwise, search the database for the filter text.
2. Add the search method to the contacts repository.
ContactRepository.java
public interface ContactRepository extends JpaRepository<Contact, Long> {

    @Query("select c from Contact c " +
            "where lower(c.firstName) like lower(concat('%', :searchTerm, '%')) " +
            "or lower(c.lastName) like lower(concat('%', :searchTerm, '%'))") ①
    List<Contact> search(@Param("searchTerm") String searchTerm); ②
}
① Uses the @Query annotation to define a custom query. In this case, it checks whether the string matches the first or the last name, ignoring case. The query uses the Java Persistence Query Language (JPQL), an SQL-like language for querying JPA-managed databases.
② Selects the Spring Framework import for @Param.
3. Update the way MainView updates the contacts. This is the method that is called every time the filter text field changes.
MainView.java
private void updateList() {
  grid.setItems(contactService.findAll(filterText.getValue()));
}
4. Build the application and try out the filtering. You should be able to filter the contacts by entering a term in the text field.
So far, we’ve created an application that shows and filters contacts that are stored in a database. Next, we’ll add a form to add, remove, and edit contacts.

Spring Boot and Vaadin : Creating a Spring Boot backend: database, JPA repositories, and services

Most real-life applications need to persist and retrieve data from a database. In this tutorial, we use an in-memory H2 database. You can easily adapt the configuration to use another database, like MySQL or Postgres.
There are a fair number of classes to copy and paste to set up your backend. You can make your life easier by downloading a project with all the changes, if you prefer. The download link is at the end of this chapter. The code from the previous tutorial chapter can be found here, if you want to jump directly into this chapter.
Installing the database dependencies
We use Spring Data for data access. Under the hood, it uses Hibernate to map Java objects to database entities through the Java Persistence API. Spring Boot takes care of configuring all these tools for you.
To add database dependencies:
1. In the <dependencies> tag in your pom.xml file, add the following dependencies for H2 and Spring Data:
pom.xml
<dependencies>
  <!--all existing dependencies -->
  <!--database dependencies -->
  <dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <scope>runtime</scope>
  </dependency>
</dependencies>
2. Save your file and when IntelliJ asks if you want to enable automatic importing of Maven dependencies, select Enable Auto-Import.
If IntelliJ doesn’t ask you to import dependencies, or if you use another IDE, type mvn install in the command line (while in the root of your project folder) to download the dependencies.
NOTE
H2 is a great database for tutorials because you don’t need to install external software. If you prefer, you can easily change to another database. See:
• Setting up MySQL
• Setting up Postgres
The instructions in the remainder of this tutorial are the same, regardless of which database you use. To keep things simple, we recommend sticking with H2.
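For reference, switching databases is mostly a matter of datasource configuration. Here is a sketch of what src/main/resources/application.properties might contain for MySQL; the URL and credentials are example values to replace with your own:

# Example datasource settings for MySQL; adjust the URL and credentials.
spring.datasource.url=jdbc:mysql://localhost:3306/crm
spring.datasource.username=root
spring.datasource.password=secret
# Let Hibernate create and update the schema automatically for this tutorial.
spring.jpa.hibernate.ddl-auto=update

You would also need to swap the h2 dependency in pom.xml for the MySQL JDBC driver (mysql-connector-java).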
Defining the data model
Our application is a customer relationship management (CRM) system that manages contacts and companies. To map content to our database, we need to create the following entity classes:
• Contact: An employee at a company.
• Company: An entity that can have several employees.
• AbstractEntity: A common superclass for both.
To create your entity classes:
1. Create a new package: com.vaadin.tutorial.crm.backend.entity.
2. Create three classes, AbstractEntity, Contact, and Company, in the new package, using the code detailed below.
The easiest way to do this is to copy the full class and paste it into the package in the project view. IntelliJ (and most other IDEs) will automatically create the Java file for you.
a. Start by adding AbstractEntity, the common superclass. It defines how object ids are generated and how object equality is determined.
AbstractEntity.java
package com.vaadin.tutorial.crm.backend.entity;

import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;

@MappedSuperclass
public abstract class AbstractEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;

    public Long getId() {
        return id;
    }

    public boolean isPersisted() {
        return id != null;
    }

    @Override
    public int hashCode() {
        if (getId() != null) {
            return getId().hashCode();
        }
        return super.hashCode();
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        AbstractEntity other = (AbstractEntity) obj;
        if (getId() == null || other.getId() == null) {
            return false;
        }
        return getId().equals(other.getId());
    }
}
b. Next, create the Contact class:
Contact.java
package com.vaadin.tutorial.crm.backend.entity;

import javax.persistence.*;
import javax.validation.constraints.Email;
import javax.validation.constraints.NotEmpty;
import javax.validation.constraints.NotNull;

@Entity
public class Contact extends AbstractEntity implements Cloneable {

    public enum Status {
        ImportedLead, NotContacted, Contacted, Customer, ClosedLost
    }

    @NotNull
    @NotEmpty
    private String firstName = "";

    @NotNull
    @NotEmpty
    private String lastName = "";

    @ManyToOne
    @JoinColumn(name = "company_id")
    private Company company;

    @Enumerated(EnumType.STRING)
    @NotNull
    private Contact.Status status;

    @Email
    @NotNull
    @NotEmpty
    private String email = "";

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public Status getStatus() {
        return status;
    }

    public void setStatus(Status status) {
        this.status = status;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public void setCompany(Company company) {
        this.company = company;
    }

    public Company getCompany() {
        return company;
    }

    @Override
    public String toString() {
        return firstName + " " + lastName;
    }
}
c. Finally, copy over the Company class:
Company.java
package com.vaadin.tutorial.crm.backend.entity;

import javax.persistence.*;
import java.util.LinkedList;
import java.util.List;

@Entity
public class Company extends AbstractEntity {
    private String name;

    @OneToMany(mappedBy = "company", fetch = FetchType.EAGER)
    private List<Contact> employees = new LinkedList<>();

    public Company() {
    }

    public Company(String name) {
        setName(name);
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public List<Contact> getEmployees() {
        return employees;
    }
}
3. Verify that you’re able to build the project successfully.
If you see a lot of errors about missing classes, double check the Maven dependencies and run mvn install to make sure they are downloaded.
Creating repositories to access the database
Now that you have defined the data model, the next step is to create repository classes to access the database. Spring Boot makes this a painless process. All you need to do is define an interface that describes the entity type and primary key type, and Spring Data will configure it for you.
To create your repository classes:
1. Create a new package com.vaadin.tutorial.crm.backend.repository.
2. Copy the following two repository classes into the package:
ContactRepository.java
package com.vaadin.tutorial.crm.backend.repository;
import com.vaadin.tutorial.crm.backend.entity.Contact;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import java.util.List;
public interface ContactRepository extends JpaRepository<Contact, Long> {
}
CompanyRepository.java
package com.vaadin.tutorial.crm.backend.repository;
import com.vaadin.tutorial.crm.backend.entity.Company;
import org.springframework.data.jpa.repository.JpaRepository;
public interface CompanyRepository extends JpaRepository<Company, Long> {
}
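Although these interfaces are empty, they already provide full CRUD operations through JpaRepository. As a side note (not needed for this tutorial), Spring Data can also derive queries from method names; a hypothetical example:

public interface ContactRepository extends JpaRepository<Contact, Long> {

  // Spring Data derives the query from the method name at startup;
  // no implementation is required.
  List<Contact> findByLastNameIgnoreCase(String lastName);
}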
Creating service classes for business logic
It’s good practice to not let UI code access the database directly. Instead, we create service classes that handle business logic and database access. This makes it easier for you to control access and to keep your data consistent.
To create your service classes:
1. Create a new package com.vaadin.tutorial.crm.backend.service.
2. Copy the following two service classes into the package:

ContactService.java
package com.vaadin.tutorial.crm.backend.service;

import com.vaadin.tutorial.crm.backend.entity.Contact;
import com.vaadin.tutorial.crm.backend.repository.CompanyRepository;
import com.vaadin.tutorial.crm.backend.repository.ContactRepository;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

@Service ①
public class ContactService {
    private static final Logger LOGGER = Logger.getLogger(ContactService.class.getName());

    private ContactRepository contactRepository;
    private CompanyRepository companyRepository;

    public ContactService(ContactRepository contactRepository,
                          CompanyRepository companyRepository) { ②
        this.contactRepository = contactRepository;
        this.companyRepository = companyRepository;
    }

    public List<Contact> findAll() {
        return contactRepository.findAll();
    }

    public long count() {
        return contactRepository.count();
    }

    public void delete(Contact contact) {
        contactRepository.delete(contact);
    }

    public void save(Contact contact) {
        if (contact == null) { ③
            LOGGER.log(Level.SEVERE,
                    "Contact is null. Are you sure you have connected your form to the application?");
            return;
        }
        contactRepository.save(contact);
    }
}
① The @Service annotation lets Spring know that this is a service class and makes it available for injection. This allows you to easily use it from your UI code later on.
② The constructor takes two parameters: ContactRepository and CompanyRepository. Spring provides instances based on the interfaces we defined earlier.
③ For now, most operations are just passed through to the repository. The only exception is the save method, which checks for null values before attempting to save.
CompanyService.java
package com.vaadin.tutorial.crm.backend.service;

import com.vaadin.tutorial.crm.backend.entity.Company;
import com.vaadin.tutorial.crm.backend.repository.CompanyRepository;
import org.springframework.stereotype.Service;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

@Service
public class CompanyService {
    private CompanyRepository companyRepository;

    public CompanyService(CompanyRepository companyRepository) {
        this.companyRepository = companyRepository;
    }

    public List<Company> findAll() {
        return companyRepository.findAll();
    }
}
Populating with test data
Next, we add a method that generates test data to populate our database. This makes it easier to work with the application. To do this, add the following method at the end of ContactService:
ContactService.java
// Requires these additional imports in ContactService:
// import com.vaadin.tutorial.crm.backend.entity.Company;
// import javax.annotation.PostConstruct;
// import java.util.Random;
// import java.util.stream.Collectors;
// import java.util.stream.Stream;
@PostConstruct ①
public void populateTestData() {
    if (companyRepository.count() == 0) {
        companyRepository.saveAll( ②
                Stream.of("Path-Way Electronics", "E-Tech Management", "Path-E-Tech Management")
                        .map(Company::new)
                        .collect(Collectors.toList()));
    }
    if (contactRepository.count() == 0) {
        Random r = new Random(0);
        List<Company> companies = companyRepository.findAll();
        contactRepository.saveAll( ③
                Stream.of("Gabrielle Patel", "Brian Robinson", "Eduardo Haugen",
                        "Koen Johansen", "Alejandro Macdonald", "Angel Karlsson",
                        "Yahir Gustavsson", "Haiden Svensson", "Emily Stewart",
                        "Corinne Davis", "Ryann Davis", "Yurem Jackson",
                        "Kelly Gustavsson", "Eileen Walker", "Katelyn Martin",
                        "Israel Carlsson", "Quinn Hansson", "Makena Smith",
                        "Danielle Watson", "Leland Harris", "Gunner Karlsen",
                        "Jamar Olsson", "Lara Martin", "Ann Andersson",
                        "Remington Andersson", "Rene Carlsson", "Elvis Olsen",
                        "Solomon Olsen", "Jaydan Jackson", "Bernard Nilsen")
                        .map(name -> {
                            String[] split = name.split(" ");
                            Contact contact = new Contact();
                            contact.setFirstName(split[0]);
                            contact.setLastName(split[1]);
                            contact.setCompany(companies.get(r.nextInt(companies.size())));
                            contact.setStatus(Contact.Status.values()[r.nextInt(Contact.Status.values().length)]);
                            String email = (contact.getFirstName() + "." + contact.getLastName()
                                    + "@" + contact.getCompany().getName().replaceAll("[\\s-]", "")
                                    + ".com").toLowerCase();
                            contact.setEmail(email);
                            return contact;
                        }).collect(Collectors.toList()));
    }
}
① The @PostConstruct annotation tells Spring to run this method after constructing ContactService.
② Creates 3 test companies.
③ Creates test contacts.

Restart the server to pick up all the new dependencies
You need to stop and restart the application to make sure all the new POM dependencies are picked up correctly. You can download the project with a fully set-up back end below. Unzip the project and follow the instructions in the importing chapter.
Download from GitHub
In the next chapter, we’ll use the back end to populate data into a data grid in the browser.