2022-01-31

Prepare new list based on conditions from two object lists using stream

I am working in a Spring Boot application, working with lists.

I have these classes :

public class MyModel {
  private String pptId;
  private String pptTitle;
  private String modelNumber;
}

public class FullData { 
    private String pptTitle;
    private String modelNumber; 
    private String pptDetails;  
    private String price;
    ...............
    ..............
}

List<MyModel> sourceModelList (this is the full list):

MyModel(1,'ppt1','a1')
MyModel(1,'ppt1','a2')
MyModel(2,'ppt2','a1')
MyModel(2,'ppt2','a3')
MyModel(2,'ppt2','a4')
MyModel(3,'ppt3','a1')
MyModel(3,'ppt3','a3')
MyModel(3,'ppt3','a5')

I have a filtered FullData list, produced by some earlier processing:

List<FullData> filteredFullDataList (it is a unique list):

FullData(null,'a1','pptDetails1','300')
FullData(null,'a2','pptDetails21','70')
FullData(null,'a4','pptDetails41','10')
FullData(null,'a5','pptDetails13','45')

Now I need to set the title and prepare a list in the same order as sourceModelList, removing only the entries whose modelNumber is not present in filteredFullDataList (such as a3). But I need the repeated models, as they are present in other ppts.

We need the final list of FullData as:

FullData('ppt1','a1','pptDetails1','300')
FullData('ppt1','a2','pptDetails21','70')
FullData('ppt2','a1','pptDetails1','300')
FullData('ppt2','a4','pptDetails41','10')
FullData('ppt3','a1','pptDetails1','300')
FullData('ppt3','a5','pptDetails13','45')

I have tried streams, processing by setting the title and then preparing the FullData objects and their list, but it is not working properly.

I need to convert the code below to streams:

 List<FullData> finalFullData = new ArrayList<>();
        for (MyModel myModel : sourceModelList) {
          for (FullData fullData : filteredFullDataList) {
            if (myModel.getModelNumber().equals(fullData.getModelNumber())) {
              fullData.setPptTitle(myModel.getPptTitle()); // was setPptTitle(), a typo
              finalFullData.add(fullData);
            }
          }
        }
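
For reference, a minimal stream version of that loop might look like the sketch below. It assumes FullData has an all-args constructor (title, modelNumber, details, price); copying into a new object also avoids a flaw in the loop above, where the same shared FullData instance is mutated and added repeatedly, so the last title wins for every occurrence.

import java.util.List;
import java.util.stream.Collectors;

List<FullData> finalFullData = sourceModelList.stream()
        .flatMap(model -> filteredFullDataList.stream()
                .filter(fd -> model.getModelNumber().equals(fd.getModelNumber()))
                // copy, so each ppt gets its own FullData with its own title
                .map(fd -> new FullData(model.getPptTitle(), fd.getModelNumber(),
                        fd.getPptDetails(), fd.getPrice())))
        .collect(Collectors.toList());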


from Recent Questions - Stack Overflow https://ift.tt/Y6Djcb9OH
https://ift.tt/Mhf8FyP6J

How do I insert data into a table that already exists?

I'm trying to insert data into a table that already exists, but I can't find anything on how to do this. I only found how to insert data into a new table.

Syntax error at or near Insert

Tutorial I visited

 SELECT film_category.film_id, film_category.category_id, rental_duration, rental_rate
 INSERT INTO category_description
 FROM film_category
 LEFT JOIN FILM
 ON film_category.film_id = film.film_id
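
For what it's worth, the usual form puts INSERT INTO before the SELECT, with a column list matching the target table (the column names of category_description are assumed here to mirror the query):

INSERT INTO category_description (film_id, category_id, rental_duration, rental_rate)
SELECT film_category.film_id, film_category.category_id, rental_duration, rental_rate
FROM film_category
LEFT JOIN film
ON film_category.film_id = film.film_id;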


from Recent Questions - Stack Overflow https://ift.tt/qB6VOS4HU
https://ift.tt/Mhf8FyP6J

What does explicitly-defaulted move constructor do?

I'm new to C++ and need some help on move constructors. I have some objects that are move-only; every object has different behaviour, but they all have an int id handle, so I tried to model them using inheritance. Here's the code:

#include <iostream>
#include <vector>

class Base {
  protected:
    int id;

    Base() : id(0) { std::cout << "Base() called " << id << std::endl; }
    virtual ~Base() {}
    Base(const Base&) = delete;
    Base& operator=(const Base&) = delete;
    Base(Base&& other) noexcept = default;
    Base& operator=(Base&& other) noexcept = default;
};

class Foo : public Base {
  public:
    Foo(int id) {
        this->id = id;
        std::cout << "Foo() called " << id << std::endl;
    }
    ~Foo() { std::cout << "~Foo() called " << id << std::endl; }
    
    Foo(const Foo&) = delete;
    Foo& operator=(const Foo&) = delete;
    Foo(Foo&& other) noexcept = default;
    Foo& operator=(Foo&& other) noexcept = default;
};

int main() {
    std::vector<Foo> foos;

    for (int i = 33; i < 35; i++) {
        auto& foo = foos.emplace_back(i);
    }

    std::cout << "----------------------------" << std::endl;
    return 0;
}

Each derived class has a specific destructor that destroys the object using id (if id is 0 it does nothing), I need to define it for every derived type. In this case, the compiler won't generate implicitly-declared copy/move ctors for me, so I have to explicitly make it move-only to follow the rule of five, but I don't understand what the =default move ctor does.

When the second foo(34) is constructed, vector foos reallocates memory and moves the first foo(33) to the new allocation. However, I saw that both the source and target of this move operation have an id of 33, so after the move, foo(33) is destroyed, leaving an invalid foo object in the vector. In the output below, I also didn't see a third ctor call, so what on earth is foo(33) being swapped with? A null object that somehow has an id of 33? Where does that 33 come from, a copy? But I've explicitly deleted the copy ctor.

Base() called 0
Foo() called 33
Base() called 0
Foo() called 34
~Foo() called 33  <---- why 33?
----------------------------
~Foo() called 33
~Foo() called 34

Now if I manually define the move ctor instead:

class Foo : public Base {
  public:
    ......

    // Foo(Foo&& other) noexcept = default;
    // Foo& operator=(Foo&& other) noexcept = default;

    Foo(Foo&& other) noexcept { *this = std::move(other); }
    Foo& operator=(Foo&& other) noexcept {
        if (this != &other) {
            std::swap(id, other.id);
        }
        return *this;
    }
};
Base() called 0
Foo() called 33
Base() called 0
Foo() called 34
Base() called 0  <-- base call
~Foo() called 0  <-- now it's 0
----------------------------
~Foo() called 33
~Foo() called 34

This time it's clearly swapping foo(33) with a base(0) object; after id 0 is destroyed, my foo object is still valid. So what's the difference between the defaulted move ctor and my own move ctor?

As far as I understand, I almost never need to manually define my move ctor body and move assignment operator unless I'm directly allocating memory on the heap. Most of the time I'll only be using raw data types such as int or float, or smart pointers and STL containers that support std::swap natively, so I thought I would be fine using =default move ctors everywhere and letting the compiler do the move memberwise, which seems to be wrong? Perhaps I should always define my own move ctor for every single class? How can I ensure the moved-from object is in a clean null state that can be safely destructed?
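
One point that may clear this up: a defaulted move constructor moves each member, and for a built-in int a "move" is just a copy, so other.id keeps its value (33) and both objects report it. Moving never nulls the source unless you write that yourself; =default cannot know that 0 means "empty" for your handle. A minimal sketch of a move ctor that leaves the source in a clean state, using std::exchange:

#include <utility>  // std::exchange

Foo(Foo&& other) noexcept : Base() {   // Base() starts this->id at 0
    id = std::exchange(other.id, 0);   // take the handle, null the source
}

With this, the moved-from object destructs with id 0 and does nothing. For the assignment operator, swapping as in your manual version is reasonable, since the old handle then gets destroyed together with other.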



from Recent Questions - Stack Overflow https://ift.tt/NW4T5qQOf
https://ift.tt/Mhf8FyP6J

How to map json response to the model with different field names

I am using ASP.NET Core 6 and the System.Text.Json library.

For example, I'm getting a response from some API with the following structure:

{
   "items": 
   [
       {
          "A": 1,
          "User": 
          {
             "Name": "John",
             "Age": 21,
             "Adress": "some str"
          }
       },
       {
          "A": 2,
          "User": 
          {
             "Name": "Alex",
             "Age": 22,
             "Adress": "some str2"
          }
       }
   ]
}

And I want to map this response to a model like List<SomeEntity>, where SomeEntity is:

    public class SomeEntity
    {
        public int MyA { get; set; } // map to A
        public User MyUser { get; set; } // map to User
    }

    public class User
    {
        public string Name { get; set; }
        public string MyAge { get; set; } // map to Age
    }

How could I do it?

UPDATE:

Is it possible to map nested properties?

    public class SomeEntity
    {
        // should I add an attribute [JsonPropertyName("User:Name")] ?
        public string UserName{ get; set; } // map to User.Name
    }
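
A sketch with System.Text.Json attributes. Two hedges: "Age" is a number in the payload, so the property is int here (reading a JSON number into a string property fails by default); and JsonPropertyName cannot address a nested path like "User:Name", so flattening is done with a pass-through property:

using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;

public class Response
{
    [JsonPropertyName("items")]
    public List<SomeEntity> Items { get; set; }
}

public class SomeEntity
{
    [JsonPropertyName("A")]
    public int MyA { get; set; }

    [JsonPropertyName("User")]
    public User MyUser { get; set; }

    // no nested-path attribute exists, so expose User.Name manually
    [JsonIgnore]
    public string UserName => MyUser?.Name;
}

public class User
{
    public string Name { get; set; }

    [JsonPropertyName("Age")]
    public int MyAge { get; set; }   // "Age" is numeric in the payload
}

// var result = JsonSerializer.Deserialize<Response>(json);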


from Recent Questions - Stack Overflow https://ift.tt/Nq1WXhbSu
https://ift.tt/Mhf8FyP6J

How can I load the spData-package?

I want to load in the map data, which is found in spData.

  1. I tried to install the package:
    install.packages("spData")
  2. Then I tried to load the package:
    library(spData)

When I do this I get the following error:

Error: package or namespace load failed for ‘spData’ in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]): namespace ‘terra’ 1.4-22 is being loaded, but >= 1.5.12 is required

How do I solve this?
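
Based on the version check in that message (installed terra 1.4-22, required >= 1.5.12), updating terra should be enough; a sketch:

# spData loads terra via its namespace, and the installed copy is too old;
# reinstalling terra from CRAN brings it past the 1.5.12 requirement.
install.packages("terra")
library(spData)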



from Recent Questions - Stack Overflow https://ift.tt/Jf2gXuH1S
https://ift.tt/Mhf8FyP6J

Got unexpected field names: ['is_dynamic_op']

I am working on a low-light video processing project where I am getting an error. For this code...

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode='FP16',
    is_dynamic_op = True)

I am getting this error...

> --------------------------------------------------------------------------- ValueError                                Traceback (most recent call
> last) <ipython-input-8-326230ed5373> in <module>()
>       2 params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
>       3     precision_mode='FP16',
> ----> 4     is_dynamic_op = True)
>       5 
>       6 # Convert the model
> 
> /usr/lib/python3.7/collections/__init__.py in _replace(_self, **kwds)
>     414         result = _self._make(map(kwds.pop, field_names, _self))
>     415         if kwds:
> --> 416             raise ValueError(f'Got unexpected field names: {list(kwds)!r}')
>     417         return result
>     418 
> 
> ValueError: Got unexpected field names: ['is_dynamic_op']

I have used these libraries:

from glob import glob
from PIL import Image
from matplotlib import pyplot as plt
from mirnet.inference import Inferer
from mirnet.utils import download_dataset, plot_result
from tensorflow.python.compiler.tensorrt import trt_convert as trt

import tensorflow as tf
import numpy as np
import time

I can't figure out how to solve the problem. I have imported all the libraries but am still stuck; please help...
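
One hedged explanation: in TensorFlow 2.x the TrtConversionParams namedtuple no longer carries an is_dynamic_op field (dynamic op mode is always enabled there), which would explain why _replace() rejects it. If that is the case, dropping the argument should be enough:

# sketch, assuming TF 2.x where dynamic op mode is implicit
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode='FP16')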



from Recent Questions - Stack Overflow https://ift.tt/X1H5hV2mi
https://ift.tt/Mhf8FyP6J

How to completely download anaconda bz2 files and dependencies for offline package installation?

I want to install an anaconda package in offline mode. I have tried to download the bz2 files on another PC using the command conda install packagename --download-only, moved the bz2 files to the offline PC, and installed them using conda install *.bz2. This works for simple packages. But for a complex package like atoti with a lot of dependencies, it seems that it can't be completely installed, as other packages on the offline PC are not compatible with atoti. Is there any way to download all of a package's .bz2 files at once?



from Recent Questions - Stack Overflow https://ift.tt/QGr0Ol1Rh
https://ift.tt/Mhf8FyP6J

2022-01-30

Both Require and import not working javascript

I am trying to create a CLI tool to make a todo list. For some reason I can't figure out, I'm unable to use either require or import to load the Chalk package for highlighting terminal output.

Here is what I have for my index.js file:

#! /usr/bin/env node
const { program } = require("commander");
const list = require("./commands/list.js");

program.command("list").description("List all the TODO tasks").action(list);

program.parse();

Here is my list.js file

#! /usr/bin/env node

const conf = new (require("conf"))();
const chalk = require("chalk");
function list() {
  const todoList = conf.get("todo-list");
  if (todoList && todoList.length) {
    console.log(
      chalk.blue.bold(
        "Tasks in green are done. Tasks in yellow are still not done."
      )
    );
    todoList.forEach((task, index) => {
      if (task.done) {
        console.log(chalk.greenBright(`${index}. ${task.text}`));
      } else {
        console.log(chalk.yellowBright(`${index}. ${task.text}`));
      }
    });
  } else {
    console.log(chalk.red.bold("You don't have any tasks yet."));
  }
}
module.exports = list;

and my package.json file

{
  "name": "near-clear-state",
  "version": "1.0.0",
  "description": "Tool to let NEAR users clear the state of their account ",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "type": "commonjs",
  "author": "Dorian",
  "license": "ISC",
  "dependencies": {
    "chalk": "^5.0.0",
    "commander": "^8.3.0",
    "conf": "^10.1.1",
    "near-api-js": "^0.44.2"
  },
  "bin": {
    "near-clear-state": "./index.js"
  }
}

When I try running anything from this CLI tool I'm making, I get this error if I use require:

➜  near-clear-state near-clear-state --help
/Users/doriankinoocrutcher/Documents/NEAR/Developer/near-clear-state/commands/list.js:4
const chalk = require("chalk");
              ^

Error [ERR_REQUIRE_ESM]: require() of ES Module /Users/doriankinoocrutcher/Documents/NEAR/Developer/near-clear-state/node_modules/chalk/source/index.js from /Users/doriankinoocrutcher/Documents/NEAR/Developer/near-clear-state/commands/list.js not supported.
Instead change the require of index.js in /Users/doriankinoocrutcher/Documents/NEAR/Developer/near-clear-state/commands/list.js to a dynamic import() which is available in all CommonJS modules.
    at Object.<anonymous> (/Users/doriankinoocrutcher/Documents/NEAR/Developer/near-clear-state/commands/list.js:4:15)
    at Object.<anonymous> (/Users/doriankinoocrutcher/Documents/NEAR/Developer/near-clear-state/index.js:3:14) {
  code: 'ERR_REQUIRE_ESM'
}

Or this error when I use import:

/Users/doriankinoocrutcher/Documents/NEAR/Developer/near-clear-state/commands/list.js:3
import { Chalk } from "chalk";
^^^^^^

SyntaxError: Cannot use import statement outside a module
    at Object.compileFunction (node:vm:352:18)
    at wrapSafe (node:internal/modules/cjs/loader:1026:15)
    at Module._compile (node:internal/modules/cjs/loader:1061:27)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1149:10)
    at Module.load (node:internal/modules/cjs/loader:975:32)
    at Function.Module._load (node:internal/modules/cjs/loader:822:12)
    at Module.require (node:internal/modules/cjs/loader:999:19)
    at require (node:internal/modules/cjs/helpers:102:18)
    at Object.<anonymous> (/Users/doriankinoocrutcher/Documents/NEAR/Developer/near-clear-state/index.js:3:14)
    at Module._compile (node:internal/modules/cjs/loader:1097:14)

Node.js v17.4.0

Please help me
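
For what it's worth, chalk 5.x ships as an ES module only, which is exactly what the ERR_REQUIRE_ESM message describes. A sketch of list.js staying on CommonJS and loading chalk lazily with a dynamic import() (the other common fix is pinning chalk@4, which still supports require):

#! /usr/bin/env node
const conf = new (require("conf"))();

async function list() {
  // chalk v5 is ESM-only, so it must be loaded with dynamic import()
  const { default: chalk } = await import("chalk");
  const todoList = conf.get("todo-list");
  if (todoList && todoList.length) {
    console.log(chalk.blue.bold("Tasks in green are done. Tasks in yellow are still not done."));
    // ... same forEach over todoList as before ...
  } else {
    console.log(chalk.red.bold("You don't have any tasks yet."));
  }
}
module.exports = list;

commander accepts an async action handler, so the rest of index.js can stay as it is.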



from Recent Questions - Stack Overflow https://ift.tt/2u7NDfFeX
https://bit.ly/3GblJNq

Spring Data JpaRepository saveAndFlush not working in @Transactional test method

Using Spring Data JPA I have created a couple of entities, Ebook and Review, linked by a ManyToOne relationship from Review towards Ebook. I have tried to insert a review for an existing ebook, but the result is not reflected in the database (Postgres), even though I am using saveAndFlush inside a @Transactional method. I retrieve the inserted review correctly; it is just not saving the record in the table.

Entities:

Ebook:

@Entity
public class Ebook {

    @Id
    @SequenceGenerator(
            name="ebook_sequence",
            sequenceName = "ebook_sequence",
            allocationSize = 1
    )
    @GeneratedValue (
            strategy = GenerationType.SEQUENCE,
            generator ="ebook_sequence"
    )
    private Long idEbook;
    private String title;
}

Review:

@Entity
public class Review {
    @Id
    @SequenceGenerator(
            name="sequence_review",
            sequenceName = "sequence_review",
            allocationSize = 1
    )
    @GeneratedValue(
            strategy = GenerationType.SEQUENCE,
            generator = "sequence_review"
    )
    private Long idReview;
    private String reviewText;

    //ManytoOne Ebook
    @ManyToOne(
            cascade = CascadeType.ALL,
            fetch = FetchType.LAZY
    )
    @JoinColumn(
            name = "id_ebook",
            referencedColumnName = "idEbook"
    )
    private Ebook ebook;
}

And the test in which I am trying to insert just a review corresponding to a preexisting ebook record:

    @Test
    @Transactional
    public void saveReviewWithEbook() {

        Ebook ebook = ebookRepository.getById((long)1);

        Review review = Review.builder()
                .reviewText("review text ebook 1")
                .ebook(ebook)
                .build();

        Review reviewInserted = reviewRepository.saveAndFlush(review); //saveAndFlush(review);
        Review reviewLoaded = reviewRepository.getById(reviewInserted.getIdReview());
        System.out.println("reviewLoaded = " + reviewLoaded);
    }

After executing it, there is no record inserted in table 'review', although the data appears correctly in the variable 'reviewLoaded'. Why is that? How can I save just a review corresponding to an existing ebook?
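
A likely explanation: Spring rolls back a test-level @Transactional transaction after the test method finishes, so the flushed INSERT is never committed; inside the transaction the entity is visible, which is why reviewLoaded looks right. A sketch of opting out of that default rollback:

import org.springframework.test.annotation.Commit;

@Test
@Transactional
@Commit   // keep the transaction instead of the default test rollback
public void saveReviewWithEbook() {
    // ... same body as above ...
}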



from Recent Questions - Stack Overflow https://ift.tt/rWbeniC2N
https://bit.ly/3GblJNq

JSON data type in entity class in php [duplicate]

I am using the Symfony/Doctrine ORM in my PHP project and the database is MySQL. As of now I am creating the table manually in MySQL using the command below.

create table FileUpload (id int , countryCode varchar(30),fileData json);

I want Doctrine to create this table, and the datatype of fileData should be JSON. As of now in my entity class I am using the string datatype. What datatype should I use in the entity class so that the column is created with the JSON datatype in the DB?

What modification do I need to make in my existing class below?

FileUpload.php

<?php

namespace App\Entity;

use ApiPlatform\Core\Annotation\ApiResource;
use Doctrine\ORM\Mapping as ORM;
use Symfony\Component\Validator\Constraints as Assert;

/** 
 * @ORM\Entity 
 * @ORM\Table(name="FileUpload") 
 */
class FileUpload
{
    /**
     * @ORM\Column(type="string")
     * @ORM\Id
     */
    private $countryCode;

    /** 
     * @ORM\Column(type = "string") 
     */
    private $fileData;



    public function __construct(
        string $countryCode,
        string $fileData
    ) {
        $this->countryCode = $countryCode;
        $this->fileData = $fileData;
    }


    /** 
     * Set countryCode 
     * 
     * @param string $countryCode 
     * 
     * @return FileUpload 
     */

    public function setCountryCode($countryCode)
    {
        $this->countryCode = $countryCode;
        return $this;
    }

    /** 
     * Get countryCode 
     * 
     * @return string 
     */

    public function getCountryCode()
    {
        return $this->countryCode;
    }

    /**
     * Set fileData 
     * 
     * @param string $fileData
     * 
     * @return FileUpload 
     */

    public function setFileData($fileData)
    {
        $this->fileData = $fileData;
        return $this;
    }

    /** 
     * Get fileData 
     * 
     * @return string 
     */

    public function getFileData()
    {
        return $this->fileData;
    }
}
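
Doctrine has a built-in json mapping type whose DDL generates a JSON column on MySQL, so changing the column annotation (and treating the value as an array on the PHP side) should be enough; a sketch:

/**
 * @ORM\Column(type="json")
 */
private $fileData;   // pass/return an array; Doctrine handles the JSON encoding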


from Recent Questions - Stack Overflow https://ift.tt/HTMbfvEiG
https://bit.ly/3GblJNq

How does Entity Framework map to tables in SQL?

How does EF handle the mapping to the SQL tables? I understand the general idea, but I changed the name of my model in C# and it still points to the original table when using the context object. I expected it to break, but I am guessing it is cached somewhere? Is that how it is handled deep inside EF somewhere?

More detail: this persists when the console app stops and then restarts. The model has a different name, but EF still somehow goes to the table.
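
For context, EF maps by convention rather than by the class name alone: the table name usually comes from the DbSet property name, an explicit attribute, or fluent/migration configuration, so renaming the C# class does not necessarily retarget the table. A sketch of pinning the mapping explicitly (table name hypothetical):

using System.ComponentModel.DataAnnotations.Schema;

[Table("Customers")]   // stays mapped to this table no matter the class name
public class RenamedModel
{
    public int Id { get; set; }
}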



from Recent Questions - Stack Overflow https://ift.tt/kgQU4Ex9h
https://bit.ly/3GblJNq

Monitor active warps and threads during a divergent CUDA run

I implemented some CUDA code. It runs fine, but the algorithm inherently produces strong thread divergence. This is expected.

I will later try to reduce divergence. But for the moment I would be happy to be able to measure it.

Is there an easy way (preferably using a runtime API call or a CLI tool) to check how many of my initially scheduled warps and/or threads are still active?
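
One practical option, assuming the legacy nvprof profiler still supports your GPU: its warp_execution_efficiency and branch_efficiency metrics report the ratio of active threads per executed warp, which is a direct measure of divergence:

nvprof --metrics warp_execution_efficiency,branch_efficiency ./my_app

(./my_app is a placeholder for your binary; on newer architectures Nsight Compute replaces nvprof.)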



from Recent Questions - Stack Overflow https://ift.tt/r9u2LYw8M
https://bit.ly/3GblJNq

2022-01-29

Move subtitle to left direction (aligning to y axis area) in ggplot2

Given sample data and ggplot plotting code below:

df <- data.frame(Seller=c("Ad","Rt","Ra","Mo","Ao","Do"), 
                 Avg_Cost=c(5.30,3.72,2.91,2.64,1.17,1.10), Num=c(6:1))

text <- "Real estate agents often refer to a home's curb appeal, the first impression 
it makes on potential buyers. As a would-be seller, it's important to take as dispassionate 
a look as possible at the outside of your home."

ggplot(df, aes(x=reorder(Seller, Num), y=Avg_Cost)) +
  geom_bar(stat='identity') +
  coord_flip() +
  labs(
    title = 'Costs of Selling a Home',
    subtitle = stringr::str_wrap(text, 80)
  ) +
  theme(
    plot.title = element_text(hjust = 0.5),
    plot.subtitle = element_text(hjust = 0), 
    plot.margin = unit(c(0.1, 0, 0, 0), "cm")
  )

Result:

[screenshot]

I am attempting to shift the subtitle slightly to the left (as the red arrow shows), so that it ends up in the area of the black rectangle.

I've tried adjusting the hjust value of plot.subtitle, but without success. How can I achieve that? Thanks.
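
A sketch that may do it: since ggplot2 3.3.0, theme(plot.title.position = "plot") anchors the title block (title and subtitle) to the full plot area instead of the panel, which pulls the subtitle left over the axis-label region:

ggplot(df, aes(x = reorder(Seller, Num), y = Avg_Cost)) +
  geom_bar(stat = 'identity') +
  coord_flip() +
  labs(title = 'Costs of Selling a Home',
       subtitle = stringr::str_wrap(text, 80)) +
  theme(plot.title = element_text(hjust = 0.5),
        plot.subtitle = element_text(hjust = 0),
        plot.title.position = "plot")   # align title block to plot, not panel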

Reference:

https://r-charts.com/ggplot2/margins/



from Recent Questions - Stack Overflow https://ift.tt/3Ho6Ybq
https://ift.tt/3HaN3wQ

how are compose services implemented?

I am wondering how compose implements services. To my understanding, each thing that compose does could be done with the docker CLI. For example, creating container, binding volumes, exposing ports and joining them on networks.

The one thing that is a black box in my understanding is how compose achieves the concept of a service as a unit, so that when you specify replicas under the deploy key, you get DNS round-robin style load balancing, similar to when you specify --endpoint-mode dnsrr with swarm.

Can this actually be achieved with CLI commands, or does compose do some tricks with the SDK? In both cases, my question would be what exactly happens there?



from Recent Questions - Stack Overflow https://ift.tt/3uafMOn
https://ift.tt/eA8V8J

Oracle instant client failing on ubuntu-based agent despite correct TNS_ADMIN path

I am attempting to perform an SQL query using oracle-instantclient-basic-21.5 through an Ubuntu 20.04.3 agent hosted by Azure Devops. The query itself (which reads: python query_data) works when I am running it on my own machine with specs:

  • Windows 10
  • Path=C:\oracle\product\11.2.0.4\client_x64\bin;...;...
  • TNS_ADMIN=C:\oracle\product\tns
  • Python 3.8.5 using sqlalchemy with driver="oracle" and dialect = "cx_oracle"

I am running the following:

pool:
  vmImage: 'ubuntu-latest'

steps:
 - script: |
    sudo apt install alien
  displayName: 'Install alien'

 - script: |
    sudo alien -i oracle-instantclient-basic-21.5.0.0.0-1.x86_64.rpm
  displayName: 'Install oracle-instantclient-basic'

 - script: |
    sudo sh -c 'echo /usr/lib/oracle/21/client64/ > /etc/ld.so.conf.d/oracle-instantclient.conf'
    sudo ldconfig
  displayName: 'Update the runtime link path'

 - script: |
    sudo cp tns/TNSNAMES.ORA /usr/lib/oracle/21/client64/lib/network/admin
    sudo cp tns/ldap.ORA /usr/lib/oracle/21/client64/lib/network/admin
    sudo cp tns/SQLNET.ORA /usr/lib/oracle/21/client64/lib/network/admin
    sudo cp tns/krb5.conf /usr/lib/oracle/21/client64/lib/network/admin
  displayName: 'Copy and paste correct TNS content'

 - task: UsePythonVersion@0
  inputs:
    versionSpec: '3.8'

 - script: |
    export ORACLE_HOME=/usr/lib/oracle/21/client64
    export PATH=$ORACLE_HOME/bin:$PATH
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
    export TNS_ADMIN=$ORACLE_HOME/lib/network/admin
    python query_data
  displayName: 'Attempt to run python script with locally valid environment variables'

with the error TNS:could not resolve the connect identifier specified. What I have done:

  • Checked that the locations I am referring to match the actual oracle-instantclient-basic installation

  • Copied the TNSNAMES.ORA, ldap.ORA etc. that I am using on my own machine and verified that they are present in the desired location (/usr/lib/oracle/21/client64/lib/network/admin)

  • Checked that TNS_ADMIN points to the correct path (/usr/lib/oracle/21/client64/lib/network/admin)

The SQL query does not complain about a missing client, so it is aware of the installation. Why doesn't it read the TNS_ADMIN path or its contents correctly?
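
One assumption worth testing: Linux filesystems are case-sensitive and the client looks for lowercase tnsnames.ora / sqlnet.ora, while the files are copied here with uppercase names (which Windows tolerates). A sketch of the copy step with lowercase target names:

 - script: |
    sudo cp tns/TNSNAMES.ORA /usr/lib/oracle/21/client64/lib/network/admin/tnsnames.ora
    sudo cp tns/SQLNET.ORA /usr/lib/oracle/21/client64/lib/network/admin/sqlnet.ora
   displayName: 'Copy TNS files with lowercase names'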



from Recent Questions - Stack Overflow https://ift.tt/3rY57nm
https://ift.tt/eA8V8J

sum of column based on another column will show on another

Can you help me with my DataGridView? I need to show the sum of a column based on another column.

For example, I have part numbers 1, 2, 3. The QTY for each PN row is always 1 because of the serial number given.

PN 1 has 10 qty (10 rows). I need to sum the cost per PN and put the sum value in the end cell. Please see the sample below:

Sample datagridview

It is Excel, I know, but please bear with me and pretend it is a DataGridView.

The total sum is based on the sum of cost for the same PN.
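
A sketch in C# (the column names "PN", "Cost", and "TotalSum" are assumptions for illustration): group the rows by part number with LINQ, then write each group's total into the row's sum cell.

using System;
using System.Linq;
using System.Windows.Forms;

var totals = dataGridView1.Rows.Cast<DataGridViewRow>()
    .Where(r => !r.IsNewRow)
    .GroupBy(r => r.Cells["PN"].Value?.ToString())
    .ToDictionary(g => g.Key,
                  g => g.Sum(r => Convert.ToDecimal(r.Cells["Cost"].Value)));

foreach (DataGridViewRow row in dataGridView1.Rows)
{
    if (!row.IsNewRow)
        row.Cells["TotalSum"].Value = totals[row.Cells["PN"].Value?.ToString()];
}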



from Recent Questions - Stack Overflow https://ift.tt/32LgNBu
https://ift.tt/3G8ikPx

Leaflet big GeoJson file Ajax filter best performance

I have four 2 MB GeoJSON files with four layers to load, like

LayerBoon = L.geoJSON.ajax(URL, {pointToLayer:returnBoonMarker, filter:filtertext}); 

with a filter function and this button click function

$("#btnFindText").click(function(){
    SeachTXT = $("#txtFind").val();
    LayerSt.refresh();
    LayerPr.refresh();
    LayerHL.refresh();
    LayerBoon.refresh();
})

every layer has to be re-filtered by clicking the button.

When filtering, is it possible not to reload the file each time, but to keep it in a cache and just filter it again?
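
A sketch of that idea for one of the layers, using plain fetch and core L.geoJSON so the file is downloaded once and only re-filtered (the variable and function names come from the question):

let boonData = null;                       // cached GeoJSON for this layer

fetch(URL)
  .then(r => r.json())
  .then(data => { boonData = data; rebuildBoonLayer(); });

function rebuildBoonLayer() {
  if (LayerBoon) map.removeLayer(LayerBoon);
  LayerBoon = L.geoJSON(boonData, {
    pointToLayer: returnBoonMarker,
    filter: filtertext                     // re-runs against the cached data
  }).addTo(map);
}

$("#btnFindText").click(function () {
  SeachTXT = $("#txtFind").val();
  rebuildBoonLayer();                      // no network request here
});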



from Recent Questions - Stack Overflow https://ift.tt/3Hixwv2
https://ift.tt/eA8V8J

2022-01-28

how to collect from stream using multiple conditions

I'm trying to sort a list of Message objects. This entity contains multiple attributes, but only 4 are useful to us in this case:

  • Integer : Ordre
  • String : idOras
  • Date : sentDate
  • Integer : OrdreCalcule (a concatenation of Ordre and sentDate "YYYYmmDDhhMMss")

In this case, the selection conditions are the following:

  • if two Messages have the same Ordre :
    • if they have the same idOras -> collect the newest one (newest sentDate) and remove the others
    • if they have different idOras -> collect both of them sorted by sentDate ASC
  • if two Messages have different Ordre :
    • collect both of them sorted by Ordre

For now I'm using this stream:

orasBatchConfiguration.setSortedZbusOrasList(messageList.stream()
            .collect(Collectors.groupingBy(Message::getIdOras,
                    Collectors.maxBy(Comparator.comparing(Message::getOrdreCalcule))))
            .values()
            .stream()
            .map(Optional::get)
            .sorted(Comparator.comparing(Message::getOrdreCalcule))
            .collect(Collectors.toList()));
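
If the intent is to collapse duplicates only when both Ordre and idOras match (keeping the newest), one sketch is to group by a composite key and then sort by Ordre and sentDate (it assumes getters returning Integer/String/Date as listed above):

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

List<Message> sorted = messageList.stream()
        .collect(Collectors.toMap(
                m -> Arrays.asList(m.getOrdre(), m.getIdOras()),  // composite key
                Function.identity(),
                (a, b) -> a.getSentDate().after(b.getSentDate()) ? a : b))
        .values().stream()
        .sorted(Comparator.comparing(Message::getOrdre)
                .thenComparing(Message::getSentDate))
        .collect(Collectors.toList());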


from Recent Questions - Stack Overflow https://ift.tt/3G9n8Ea
https://ift.tt/eA8V8J

How to add column names to a SpatialPolygons object based on columns from a dataframe?

Dummy SpatialPolygon:

      x_coord  y_coord
 [1,] 16.48438 59.73633
 [2,] 17.49512 55.12207
 [3,] 24.74609 55.03418
 [4,] 22.59277 61.14258
 [5,] 16.48438 59.73633

library(sp)
xym <- cbind(x_coord = c(16.48438, 17.49512, 24.74609, 22.59277, 16.48438),
             y_coord = c(59.73633, 55.12207, 55.03418, 61.14258, 59.73633))
p = Polygon(xym)
ps = Polygons(list(p), 1)
sps = SpatialPolygons(list(ps))
plot(sps)

Dummy dataframe:

df <- data.frame (date= c("2021", "2015", "2018"),
                  value= c(100, 147, 25))

Basic question, but how can I add the column names of the dataframe to the spatial polygon? (I don't need to add any values; I just want my SpatialPolygons object to have the fields "date" and "value".)
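
A sketch with sp's SpatialPolygonsDataFrame, attaching a one-row data.frame of NA values so the object gains the two fields (match.ID = FALSE because the polygon ID does not correspond to the data.frame row names):

spdf <- SpatialPolygonsDataFrame(
  sps,
  data = data.frame(date = NA, value = NA),
  match.ID = FALSE
)
names(spdf)   # "date"  "value"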



from Recent Questions - Stack Overflow https://ift.tt/3KJqAZX
https://ift.tt/eA8V8J

sort and group the dictionary data using Python based on common value

I have a list of dictionaries. I want to arrange it by weekday and group entries with a common load value for that weekday and hour, and I also need to add an end time based on the common load values. Merge the start time and end time only if the hours are in a sequence. Hours start from 00:00 and run until 23:00; while grouping, the last hour's end time will be 24:00. Examples are given below in the required output.

data =

[{"masterpeakLoadId":12,'hour': '01:00', 'day': 'Friday', 'load': 1.0}, 
 {"masterpeakLoadId":31,'hour': '05:00', 'day': 'Friday', 'load': 71.0},
 {'masterpeakLoadId':37, 'hour': '06:00', 'day': 'Friday', 'load': 71.0}, 
 {'masterpeakLoadId':54, 'hour': '11:00', 'day': 'Friday', 'load': 5.0},
 {'masterpeakLoadId':59, 'hour': '12:00', 'day': 'Friday', 'load': 6.0},
 {'masterpeakLoadId':65, 'hour': '13:00', 'day': 'Friday', 'load': 7.0},
 {'masterpeakLoadId':82, 'hour': '18:00', 'day': 'Friday', 'load': 5.0},
 {'masterpeakLoadId':87, 'hour': '23:00', 'day': 'Friday', 'load': 6.0},
 {'masterpeakLoadId':92, 'hour': '20:00', 'day': 'Friday', 'load': 7.0}, 
 {'masterpeakLoadId':105, 'hour': '02:00', 'day': 'Friday', 'load': 1.0}, 
 {'masterpeakLoadId':117, 'hour': '04:00', 'day': 'Friday', 'load': 5.0}, 
 {'masterpeakLoadId':125, 'hour': '00:00', 'day': 'Friday', 'load': 1.0},
 {'masterpeakLoadId':132, 'hour': '03:00', 'day': 'Friday', 'load': 66.0}, 
 {'masterpeakLoadId':8, 'hour': '01:00', 'day': 'Monday', 'load': 25.0},
 {'masterpeakLoadId':27, 'hour': '00:00', 'day': 'Monday', 'load': 6.0}, 
 {'masterpeakLoadId':33, 'hour': '06:00', 'day': 'Monday', 'load': 45.0}]

required output =

[{"day":'Friday',"start_time":00:00, "end_time":03:00, "load":1.0 },
{"day":'Friday',"start_time":03:00, "end_time":04:00, "load":66.0},
{"day":'Friday',"start_time":04:00, "end_time":05:00, "load":5.0},
{"day":'Friday',"start_time":05:00, "end_time":07:00, "load":71.0},
{"day":'Monday',"start_time":23:00, "end_time":24:00, "load":6.0},
{"day":'Monday',"start_time":01:00, "end_time":02:00, "load":25.0},
{'day':'Monday',"start_time":06:00, "end_time":07:00, "load":45.0}]

As you can see in the output, we have merged start time 00:00 up to 03:00 on Friday, because the hours are in a sequence for the same day and the load values are the same. I am new to Python and need help achieving this result; I have tried the following code.

from itertools import groupby

peakLoad = []
context = {}
sortedDictRecords = sorted(data, key=lambda d: d['day'])
print(sortedDictRecords)
groups = groupby(sortedDictRecords, key=lambda d: d['hour'])
sortedPeakLoadRecords = [{'hour': sid,
                          'loads': [(r.pop('hour'), r)[1] for r in record]}
                         for sid, record in groups]
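
A sketch of the merging itself: group by day, then do a run-length pass over consecutive hours with equal load (ordering days Monday..Sunday is left out, but can be added with a weekday index in the sort key):

from itertools import groupby

def merge_runs(data):
    result = []
    # zero-padded "HH:00" strings sort correctly as text
    rows = sorted(data, key=lambda d: (d['day'], d['hour']))
    for day, day_rows in groupby(rows, key=lambda d: d['day']):
        day_rows = list(day_rows)
        start = prev = day_rows[0]
        for r in day_rows[1:] + [None]:            # None flushes the last run
            contiguous = (r is not None and r['load'] == prev['load']
                          and int(r['hour'][:2]) == int(prev['hour'][:2]) + 1)
            if not contiguous:
                end = int(prev['hour'][:2]) + 1    # 23:00 ends at 24:00
                result.append({'day': day, 'start_time': start['hour'],
                               'end_time': f'{end:02d}:00', 'load': start['load']})
                start = r
            prev = r
    return result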


from Recent Questions - Stack Overflow https://ift.tt/3AB47tg
https://ift.tt/eA8V8J

Deploying Uniswap v2 / Sushiswap or similar in Brownie, Hardhat or Truffle test suite

I am writing an automated test suite that needs to test functions against a Uniswap v2 style automated market maker: doing swaps and using different order routing. Thus, routers need to be deployed.

Are there any existing examples of how to deploy a testable Uniswap v2 style exchange in Brownie? Because Brownie is used by a minority of smart contract developers, are there any examples for Truffle or Hardhat?

I am also exploring the option of using a mainnet fork, but I am not sure if this operation is too expensive (slow) to be used in unit testing.



from Recent Questions - Stack Overflow https://ift.tt/3IDEf2A
https://ift.tt/eA8V8J

django admin The outermost 'atomic' block cannot use savepoint = False when autocommit is off

When I try to delete an item from a table generated by Django admin, it throws this error:

Environment:


Request Method: POST
Request URL: http://127.0.0.1:8000/admin/sybase_app/packageweight/?q=493

Django Version: 1.8
Python Version: 3.6.9
Installed Applications:
('django.contrib.admin',
 'django.contrib.auth',
 'django.contrib.contenttypes',
 'django.contrib.sessions',
 'django.contrib.messages',
 'django.contrib.staticfiles',
 'blog',
 'sybase_app')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
 'django.middleware.common.CommonMiddleware',
 'django.middleware.csrf.CsrfViewMiddleware',
 'django.contrib.auth.middleware.AuthenticationMiddleware',
 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
 'django.contrib.messages.middleware.MessageMiddleware',
 'django.middleware.clickjacking.XFrameOptionsMiddleware',
 'django.middleware.security.SecurityMiddleware')


Traceback:
File "/home/pd/.local/lib/python3.6/site-packages/django/core/handlers/base.py" in get_response
  132.                     response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/pd/.local/lib/python3.6/site-packages/django/contrib/admin/options.py" in wrapper
  616.                 return self.admin_site.admin_view(view)(*args, **kwargs)
File "/home/pd/.local/lib/python3.6/site-packages/django/utils/decorators.py" in _wrapped_view
  110.                     response = view_func(request, *args, **kwargs)
File "/home/pd/.local/lib/python3.6/site-packages/django/views/decorators/cache.py" in _wrapped_view_func
  57.         response = view_func(request, *args, **kwargs)
File "/home/pd/.local/lib/python3.6/site-packages/django/contrib/admin/sites.py" in inner
  233.             return view(request, *args, **kwargs)
File "/home/pd/.local/lib/python3.6/site-packages/django/utils/decorators.py" in _wrapper
  34.             return bound_func(*args, **kwargs)
File "/home/pd/.local/lib/python3.6/site-packages/django/utils/decorators.py" in _wrapped_view
  110.                     response = view_func(request, *args, **kwargs)
File "/home/pd/.local/lib/python3.6/site-packages/django/utils/decorators.py" in bound_func
  30.                 return func.__get__(self, type(self))(*args2, **kwargs2)
File "/home/pd/.local/lib/python3.6/site-packages/django/contrib/admin/options.py" in changelist_view
  1590.                 response = self.response_action(request, queryset=cl.get_queryset(request))
File "/home/pd/.local/lib/python3.6/site-packages/django/contrib/admin/options.py" in response_action
  1333.             response = func(self, request, queryset)
File "/home/pd/.local/lib/python3.6/site-packages/django/contrib/admin/actions.py" in delete_selected
  49.             queryset.delete()
File "/home/pd/.local/lib/python3.6/site-packages/django/db/models/query.py" in delete
  537.         collector.delete()
File "/home/pd/.local/lib/python3.6/site-packages/django/db/models/deletion.py" in delete
  282.         with transaction.atomic(using=self.using, savepoint=False):
File "/home/pd/.local/lib/python3.6/site-packages/django/db/transaction.py" in __enter__
  164.                         "The outermost 'atomic' block cannot use "

Exception Type: TransactionManagementError at /admin/sybase_app/packageweight/
Exception Value: The outermost 'atomic' block cannot use savepoint = False when autocommit is off.

How can I fix it?
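
This error is typically raised when autocommit has been disabled for the database connection: Django's delete path opens atomic(savepoint=False), which requires the default autocommit handling at the outermost level. A sketch of the settings entry to check (engine value is a placeholder):

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',  # placeholder
        # 'AUTOCOMMIT': False,   # <- if present, remove it (defaults to True)
    }
}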



from Recent Questions - Stack Overflow https://ift.tt/3o5hYTv
https://ift.tt/eA8V8J

2022-01-27

Typed events in Vue 3?

Currently I am manually casting an event:

const emit = defineEmits<{
  (e: 'update:modelValue', value: string | number): void
}>()

// [..]    

<input
  type="text"
  :value="modelValue"
  @input="emit('update:modelValue', ($event.target as // manually
HTMLInputElement).value)"                             // casted
/>

Is there any better way than this? Any way around having to cast it?

Hint: I am not using v-model here because the shown code is part of a component (on which v-model will then be used).
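
One common workaround (a pattern, not an official API) is to move the cast into a small typed handler, so the template stays cast-free:

function onInput(e: Event) {
  // the cast lives here once, instead of inline in the template
  emit('update:modelValue', (e.target as HTMLInputElement).value)
}

// template: <input type="text" :value="modelValue" @input="onInput" />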



from Recent Questions - Stack Overflow https://ift.tt/3fXP1EG
https://ift.tt/eA8V8J

Google app engine Django app is crashing intermittently but the log files aren't showing me why

I have a Django 4.0 application running on google app engine, and for the most part it works fine. However I have a particular page which seems to crash the application after I load the page several times. On my laptop I don't see this behavior, so I'm trying to debug what is going wrong when it is running on GAE but I don't have much visibility into what is happening. Watching the logs doesn't tell me anything interesting, just that the workers are shutting down and then that they are restarting:

gcloud app logs tail -s default

2022-01-26 16:02:38 default[fixeddev]  2022-01-26 08:02:38,933 common.views INFO     Application started
2022-01-26 16:03:40 default[fixeddev]  "GET /organization/clean_up_issues/ HTTP/1.1" 200
2022-01-26 16:03:56 default[fixeddev]  "GET /organization/clean_up_issues/ HTTP/1.1" 200
2022-01-26 16:04:10 default[fixeddev]  "GET /organization/clean_up_issues/ HTTP/1.1" 500
2022-01-26 16:04:15 default[fixeddev]  [2022-01-26 16:04:15 +0000] [12] [INFO] Handling signal: term
2022-01-26 16:04:15 default[fixeddev]  [2022-01-26 08:04:15 -0800] [22] [INFO] Worker exiting (pid: 22)
2022-01-26 16:04:15 default[fixeddev]  [2022-01-26 08:04:15 -0800] [25] [INFO] Worker exiting (pid: 25)
2022-01-26 16:04:15 default[fixeddev]  [2022-01-26 08:04:15 -0800] [27] [INFO] Worker exiting (pid: 27)
2022-01-26 16:09:49 default[fixeddev]  "GET /_ah/start HTTP/1.1" 200
2022-01-26 16:09:49 default[fixeddev]  [2022-01-26 16:09:49 +0000] [10] [INFO] Starting gunicorn 20.1.0
2022-01-26 16:09:49 default[fixeddev]  [2022-01-26 16:09:49 +0000] [10] [INFO] Listening at: http://0.0.0.0:8081 (10)
2022-01-26 16:09:49 default[fixeddev]  [2022-01-26 16:09:49 +0000] [10] [INFO] Using worker: gthread
2022-01-26 16:09:49 default[fixeddev]  [2022-01-26 16:09:49 +0000] [21] [INFO] Booting worker with pid: 21
2022-01-26 16:09:49 default[fixeddev]  [2022-01-26 16:09:49 +0000] [24] [INFO] Booting worker with pid: 24
2022-01-26 16:09:49 default[fixeddev]  [2022-01-26 16:09:49 +0000] [25] [INFO] Booting worker with pid: 25
2022-01-26 16:09:49 default[fixeddev]  [2022-01-26 16:09:49 +0000] [26] [INFO] Booting worker with pid: 26
2022-01-26 16:09:50 default[fixeddev]  OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
2022-01-26 16:09:50 default[fixeddev]  OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
2022-01-26 16:09:50 default[fixeddev]  OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
2022-01-26 16:09:50 default[fixeddev]  OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
2022-01-26 16:09:53 default[fixeddev]  2022-01-26 08:09:53,151 common.views INFO     Application started

Where do I go to get more visibility into what is actually happening during these crashes? I'm relatively new to GAE and feel like I'm debugging blind since I can't reproduce this issue on my local dev machine, and no exception is getting logged. Each crash just produces a 500.

Unrelated bonus round question: does anyone know how to deal with the OpenBLAS warnings? Is this a real issue or just a nuisance that I can't seem to suppress?



from Recent Questions - Stack Overflow https://ift.tt/34aWJsk
https://ift.tt/eA8V8J

html css tree vertical alignment (using flex, row, flex-start)

This is a Django + HTML/CSS (and very minimal or no JS) question about a tree display using nested UL/LI.

  • (so far) the work of displaying the tree vertically/horizontally is already done
  • (current issue) the aim is to display a dynamic or static tree using nested ULs in multiple formats, so end-users can choose whichever option is easier to visualize: vertical tree, horizontal tree, with the goal node on the left or right side. So far those are achieved.
  • currently the issue is displaying a vertically aligned horizontal tree with flex; the wrap of one tree path is making some impact on the next connectors (still trying to figure out which CSS selector and associated property will help to clear the gap)

Please find the attached screenshot and also the code.

[screenshot]

Latest results screenshot: [screenshot]

CSS code:

<style>
    body {
        padding-top: 10px;
    }
    
    .tree {
        list-style: none;
    }
    
    .tree,
    .tree * {
        margin: 0;
    }
    
    .tree li {
        display: flex;
        flex-direction: row;
        align-items: flex-start;
        position: relative;
        padding-left: 2vw;
    }
    
    .tree li::before {
        content: '';
        position: absolute;
        align-items: center;
        left: 0;
        top: 10%;
        border-top: 4px solid rgb(42, 165, 97);
        width: 2vw;
    }
    /* ========================================================= */
    
    .tree li::after {
        content: '';
        position: absolute;
        align-items: flex-start;
        left: 0;
        top:10%;
        margin-bottom: 2px;
    }
    
    .tree li:not(:only-child):after {
        content: '';
        position: absolute;
        left: 0;
        bottom: 10%;
        margin-top: 0px;
        margin-bottom: 2px;
        border-left: 3px solid rgb(172, 206, 20);
    }
  
    /* ========================================================= */
    
    .tree li:last-of-type::after {
        height: 50%;
        top: 0;
    }
    
    .tree li:first-of-type::after {
        height: 50%;
        bottom: 0;
    }
    
    .tree li:not(:first-of-type):not(:last-of-type)::after {
        height: 100%;
    }
    
    .tree ul,
    .tree ol {
        padding-left: 2vw;
        position: relative;
    }
    
    .tree ul:not(:empty)::before,
    .tree ol:not(:empty)::before {
        content: '';
        position: absolute;
        left: 0;
        top: 8%;
        border-top: 4px solid red;
        width: 2vw;
    }
    
    
    .tree span {
        border: 1px solid;
        text-align: center;
        padding: 0.33em 0.66em;
        background-color: yellow;
        color: blue;
    }
    
    .tree>li {
        padding-left: 0;
    }
    
    .tree>li::before,
    .tree>li::after {
        display: none;
    }
    
    ol,
    ul {
        counter-reset: section;
    }
    
    li span::before {
        counter-increment: section;
        /* content: counters(section, ".") " "; */
        font-family: monospace;
    }
    
    body {
        /* display: flex; */
        justify-content: center;
        align-items: center;
        height: 100%;
}
</style>


from Recent Questions - Stack Overflow https://ift.tt/3rS6F2i
https://ift.tt/eA8V8J

try-catch instead of if in edge cases

Would it be a good idea to replace the if statements with try-catch in the following use cases (performance- and readability-wise)?

Example 1

public static void AddInitializable(GameObject initializable)
{
    if(!HasInstance)
    { // this should only happen if I have forgotten to instantiate the GameManager manually
        Debug.LogWarning("GameManager not found.");
        return;
    }

    instance.initializables.Add(initializable);
    initializable.SetActive(false);
}

public static void AddInitializable2(GameObject initializable)
{
    try
    {
        instance.initializables.Add(initializable);
        initializable.SetActive(false);
    }
    catch
    {
        Debug.LogWarning("GameManager not found.");
    }
}

Example 2

public static void Init(int v)
{
    if(!HasInstance)
    {// this should happen only once
        instance = this;
    }

    instance.alj = v;
}

public static void Init2(int v)
{
    try
    {
        instance.alj = v;
    }
    catch
    {
        instance = this;
        Init(v);
    }
}

Edit:

Question 2: How many exceptions can occur before the try-catch version stops being a net performance win?



from Recent Questions - Stack Overflow https://ift.tt/3r3GiY8
https://ift.tt/eA8V8J

Downloading google forms through python

I was trying to download Google Forms through the Google Drive API, but it turned out not to be possible. Now I am thinking of downloading the responses in a spreadsheet MIME type, but I'm not sure how to do so. Or would the Google Forms API fulfill my request? Thanks.



from Recent Questions - Stack Overflow https://ift.tt/3qZI10q
https://ift.tt/eA8V8J

Avoid "where" clause grouping when applying multiple where clauses inside the same scope

I have this scope in my Model:

public function scopeExtraFilters($query){
    $query->where('first_name', 'test')->orWhere('name', 'testing');
    return $query;
}

I'm applying the clause like this:

$query = User::where('age', 30)->extraFilters()->toSql();

Expected SQL would be:

select * from users where age=30 and first_name='test' or name='testing'

I'm getting this:

select * from users where age=30 and (first_name='test' or name='testing')

It seems that this is the normal behavior, since both where clauses are applied inside the same scope. Is there a workaround to tell the builder not to group them?

Of course, my logic is much more complex than this, otherwise I could simply have a scope method for each one. I need to apply several filters on the same scope but without nesting.
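
For what it's worth, that grouping is deliberate: Eloquent nests the wheres a scope adds whenever they contain an or, so the scope cannot leak an unparenthesized orWhere into the outer query. A sketch of getting the ungrouped SQL by applying the clauses outside the scope (with the usual caveat that an ungrouped or often matches more rows than intended):

$query = User::where('age', 30)
    ->where('first_name', 'test')
    ->orWhere('name', 'testing')
    ->toSql();
// select * from users where age = ? and first_name = ? or name = ?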

Thanks.



from Recent Questions - Stack Overflow https://ift.tt/3tVCYzX
https://ift.tt/eA8V8J

Get a list of every Layer, in every Service, in every Folder in an ArcGIS REST endpoint

I have two ArcGIS REST endpoints for which I am trying to get a list of every layer:

https://rdgdwe.sc.egov.usda.gov/arcgis/rest/services
https://services1.arcgis.com/RLQu0rK7h4kbsBq5/ArcGIS/rest/services

These are not my organization's endpoints so I don't have access to them internally. At each of these endpoints there can be folders, services, and layers, or just services and layers.

My goal is to get a list of all layers. So far I have tried:

endpoints = ["https://rdgdwe.sc.egov.usda.gov/arcgis/rest/services",
             "https://services1.arcgis.com/RLQu0rK7h4kbsBq5/ArcGIS/rest/services"]

layers = []  # defined once, outside the loop, so results accumulate
for item in endpoints:
    reqs = requests.get(item, verify=False)
    # used verify=False because otherwise I get an SSL error for endpoints[0]
    soup = BeautifulSoup(reqs.text, 'html.parser')

    for link in soup.find_all('a'):
        print(link.get('href'))
        layers.append(link)
     

However, this doesn't account for the variable nested folders/services/layers or services/layers schemas, and it doesn't seem to be fully appending to my layers list.

I'm thinking I could also go the JSON route and append ?f=pjson. So for example:

https://rdgdwe.sc.egov.usda.gov/arcgis/rest/services/?f=pjson would get me the folders, https://rdgdwe.sc.egov.usda.gov/arcgis/rest/services/broadband/?f=pjson would get me all the services in the broadband folder, and https://rdgdwe.sc.egov.usda.gov/arcgis/rest/services/broadband/CDC_5yr_OpioidOverDoseDeaths_2016/MapServer?f=pjson would get me the CDC_OverDoseDeathsbyCounty2016_5yr layer in the first service (CDC_5yr_OpioidOverDoseDeaths_2016) in the broadband folder.

Any help is appreciated. I put this here rather than in the GIS Stack Exchange as it seems more of a Python question than a geospatial one.
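
A sketch of that JSON route: the root document exposes "folders" and "services", and each service document exposes "layers", so a small recursion covers both schemas (with and without folders):

import requests

def list_layers(base_url):
    """Recursively collect layer URLs from an ArcGIS REST endpoint."""
    layers = []
    info = requests.get(base_url, params={'f': 'pjson'}, verify=False).json()
    for folder in info.get('folders', []):
        layers += list_layers(f"{base_url}/{folder}")
    for svc in info.get('services', []):
        name = svc['name'].split('/')[-1]              # strip any folder prefix
        svc_url = f"{base_url}/{name}/{svc['type']}"
        svc_info = requests.get(svc_url, params={'f': 'pjson'}, verify=False).json()
        for lyr in svc_info.get('layers', []):
            layers.append(f"{svc_url}/{lyr['id']}")
    return layers

all_layers = [lyr for url in endpoints for lyr in list_layers(url)]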



from Recent Questions - Stack Overflow https://ift.tt/3G3xg1i
https://ift.tt/eA8V8J

2022-01-26

Is it possible to show an image in Java console?

I want to make a Java application that shows an image in the console, but I cannot find anything on the topic. How would I do this?
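
The console itself can only print text, so the usual options are opening a Swing window instead, or approximating the image with characters. A sketch of the latter (the file path is a placeholder):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class AsciiImage {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("image.png")); // placeholder path
        for (int y = 0; y < img.getHeight(); y += 8) {           // sample rows
            StringBuilder row = new StringBuilder();
            for (int x = 0; x < img.getWidth(); x += 4) {        // sample columns
                int rgb = img.getRGB(x, y);
                int lum = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                row.append(lum < 128 ? '#' : ' ');               // dark pixel -> '#'
            }
            System.out.println(row);
        }
    }
}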



from Recent Questions - Stack Overflow https://ift.tt/3KJy0fO
https://ift.tt/eA8V8J

Adding quotes to text in line using Python

I am using Visual Studio Code to replace text with Python. I am using a source file with original text and converting it into a new file with new text.

I would like to add quotes to the new text that follows. For example:

Original text: set vlans xxx vlan-id xxx

New text: vlan xxx name "xxx" (add quotes to the remaining portion of the line as seen here)

Here is my code:

    with open("SanitizedFinal_E4300.txt", "rt") as fin:
        with open("output6.txt", "wt") as fout:
            for line in fin:
                     line = line.replace('set vlans', 'vlan').replace('vlan-id', 'name')
                     fout.write(line)

Is there a way to add quotes for text in the line that follows 'name'?

Edit:

I tried this code:

with open("SanitizedFinal_E4300.txt", "rt") as fin:
    with open("output6.txt", "wt") as fout:
        for line in fin:
            line = line.replace('set vlans', 'vlan').replace('vlan-id', 'name')
            words = line.split()
            words[-1] = '"' + words[-1] + '"'
            line = ' '.join(words)
            fout.write(line)

and received this error:

line 124, in <module>
words[-1] = '"' + words[-1] + '"'
IndexError: list index out of range

I also tried this code with no success:

with open("SanitizedFinal_E4300.txt", "rt") as fin:
    with open("output6.txt", "wt") as fout:
        for line in fin:
            line = line.replace('set vlans', 'vlan').replace('vlan-id', 'name')

import re
t = 'set vlans xxx  vlan-id xxx'
re.sub(r'set vlans(.*)vlan-id (.*)', r'vlan\1names "\2"', t)
# 'vlan xxx  names "xxx"'  (REPL output)

Again, my goal is to automatically add double quotes to the characters (vlan numbers) at the end of a line.

For example:

Change this: set vlans default vlan-id 1

Into this: vlan default name "1"
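
A sketch that does the whole rewrite in one re.sub, which also sidesteps the IndexError (that came from blank lines, where split() returns an empty list so words[-1] does not exist):

import re

with open("SanitizedFinal_E4300.txt", "rt") as fin, open("output6.txt", "wt") as fout:
    for line in fin:
        # 'set vlans default vlan-id 1' -> 'vlan default name "1"'
        fout.write(re.sub(r'set vlans (\S+) vlan-id (\S+)', r'vlan \1 name "\2"', line))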



from Recent Questions - Stack Overflow https://ift.tt/3AqrcyX
https://ift.tt/eA8V8J

OAuth authorization flow for private API with Microsoft AD

Our company is using Microsoft AD for user management and authentication, and authorization is also done using the AD groups/roles. In most 3rd-party applications the users can authenticate with their AD accounts. Now we are developing new applications which should use an internal API. Therefore we created a new enterprise app in our Microsoft tenant and defined a couple of roles. On the client side it is the normal flow: users authenticate with their accounts and the client receives the access token it should send to the API. And here is the point where I am not sure what the best way to implement it is. Since all the users already exist in the AD, there is no need to use the access token only to get the user identifier and create/link the user in the internal database. I want to use the AD users and be able to verify the roles and use them in the services behind the API gateway as-is. But the roles are not stored in the access token, so I assume I have to request them from Microsoft separately. But I also do not want to request them every time a user sends a request to my API; I want to rely on the token the client sends to me, which I can verify.

So what is the best way to implement it? Should I create a new bearer JWT in our own auth service, containing all the information I need, and provide it to the client, so that it sends it to me every time? Should the client use this token for authorizing the user as well? But the client can also request the ID token from Microsoft. Would our internal token replace the ID token and access token? Or should we just use the ID token for requests to the API? Creating our own token looks like overhead to me, since we only work with AD users, but I also don't want to use the ID token for authorization in the API.



from Recent Questions - Stack Overflow https://ift.tt/3FBRHCd
https://ift.tt/eA8V8J

I am making a POST request through Laravel + Vue in the browser and I am getting a 401, but it works in Postman with the exact same headers & parameters

This is the function that is triggered once the form button is clicked:

      add_book: function(){
          axios.post('http://127.0.0.1:8000/api/v1/books', {
            headers: {
            'Content-Type': 'application/json',
            'Accept': 'application/json',
            'Authorization': 'Bearer' this.token,
            },
            params: {
                'name': this.name,
                'description': this.description,
                'publication_year': this.publication_year,
            }

            }).then((response => {
                if (response.status == 201){
                    this.name = "";
                    this.description = "";
                    this.publication_year = "";
                    console.log('a new book is added');
                }
            })).catch((error) => {
                console.log(error);
        });
        console.log(this.name);
      }
  }

And this is the VS Code Thunder Client (an extension that works like Postman) request.

[screenshots: POST request in VS Code]
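
Two things stand out against axios's signature, axios.post(url, data, config): the body is the second argument and headers belong in the third, while here the headers/params object is being sent as the body, so the Authorization header never reaches Laravel (hence 401 in the browser but not in a client that sends real headers); 'Bearer' this.token is also missing the space and string concatenation. A sketch:

axios.post('http://127.0.0.1:8000/api/v1/books', {
    // request body
    name: this.name,
    description: this.description,
    publication_year: this.publication_year,
  }, {
    // request config
    headers: {
      'Content-Type': 'application/json',
      'Accept': 'application/json',
      'Authorization': 'Bearer ' + this.token,
    },
  })
  .then((response) => { /* ... */ });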



from Recent Questions - Stack Overflow https://ift.tt/3qXkNbm
https://ift.tt/eA8V8J

Missing import(Rcpp) in NAMESPACE leads to C++ library error during R CMD check

Summary

I am working on an R package that uses Rcpp. I took over the project with many issues and I am trying to fix them. The problem is that I don't know how to create a minimal example for reproduction in this situation, because the package is quite large and I was not involved in the early setup. I would appreciate suggestions on how to go about it; I am new to writing packages in R/Rcpp.

I got it into a state that it passes automated R CMD checks both on macOS and Linux in Github Actions.

There is a deprecated file named "R/simulate.R" that contains one function that is no longer used. I am trying to remove this file.

The relevant lines are:

...
#' @useDynLib mvMAPIT
#' @export
#' @import CompQuadForm
#' @import doParallel
#' @import Rcpp
#' @import RcppArmadillo
#' @import Matrix
#' @import mvtnorm
#' @import PHENIX
simulate <- function(...) {...}

I used devtools::document() to update the autogenerated files in the package.

With this, the lines

import(Matrix)
import(PHENIX)
import(Rcpp)
import(RcppArmadillo)
import(doParallel)
import(mvtnorm)

were removed from the file NAMESPACE.

After the removal, when I run R CMD check . on macOS-latest, I get the following error:

 * checking tests ... ERROR
  Running ‘testthat.R’
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > library(testthat)
  > library(myPackage)
  >
  > test_check("myPackage")
  libc++abi: __cxa_guard_acquire detected recursive initialization

Running R CMD check . on ubuntu-20.04 gives the following error when checking tests:

Error: <rlib_error_2_0 in process_get_error_connection(self, private):
 stderr is not a pipe.>

Removal steps

  • git rm R/simulate.R
  • in R devtools::document() leads to the following changes:
     modified:   NAMESPACE
     deleted:    R/simulate.R
     deleted:    man/simulate.Rd
    
  • R CMD check . produces the above error.

What I tried

I found this issue with a similar problem and therefore tried to reinstall packages with install.packages(c('Rcpp', 'RcppArmadillo', 'httpuv'))

The issue persists.

I tried git grep -nrw "simulate" to search for the function that was defined in the file to find forgotten use of the file but nothing shows up.

Progress update

Instead of running devtools::document(), I only deleted the line export(simulate) manually from the file NAMESPACE. With this, the lines

import(Matrix)
import(PHENIX)
import(Rcpp)
import(RcppArmadillo)
import(doParallel)
import(mvtnorm)

remain in the file NAMESPACE.

These lines were autogenerated from annotations to the function that I removed by deleting R/simulate.R:

...
#' @useDynLib mvMAPIT
#' @export
#' @import CompQuadForm
#' @import doParallel
#' @import Rcpp
#' @import RcppArmadillo
#' @import Matrix
#' @import mvtnorm
#' @import PHENIX
simulate <- function(...) {...}

Now, R CMD check . runs correctly.

I guess this means I do not understand the annotations and the NAMESPACE yet and there is another dependency that requires these imports in the NAMESPACE.
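
One standard way to keep those imports without the deleted function is to attach the roxygen tags to a package-level doc block instead of to simulate() (the usual Rcpp skeleton idiom), e.g. in a file like R/mvMAPIT-package.R:

#' @keywords internal
#' @useDynLib mvMAPIT
#' @importFrom Rcpp sourceCpp
"_PACKAGE"

Running devtools::document() then regenerates the useDynLib and Rcpp import lines in NAMESPACE even though simulate() is gone.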

If there is a problem with how I am asking the question, I would be happy to get feedback as well. I am also new to posting a question.

Thank you!



from Recent Questions - Stack Overflow https://ift.tt/3KFjwgO
https://ift.tt/eA8V8J

2022-01-25

Replace colors in image by closest color in palette using numpy

I have a list of colors, and I have a function closest_color(pixel, colors) where it compares the given pixels' RGB values with my list of colors, and it outputs the closest color from the list.

I need to apply this function to a whole image. When I try to use it pixel by pixel, (by using 2 nested for-loops) it is slow. Is there a better way to achieve this with numpy?
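
A vectorized sketch: broadcast the squared RGB distance between every pixel and every palette color, then index the palette by the argmin (the image is assumed to be an (H, W, 3) uint8 array and the palette a (K, 3) array):

import numpy as np

def closest_palette_image(image, palette):
    pixels = image.reshape(-1, 1, 3).astype(np.int32)    # (H*W, 1, 3)
    colors = palette.reshape(1, -1, 3).astype(np.int32)  # (1, K, 3)
    dist = ((pixels - colors) ** 2).sum(axis=2)          # (H*W, K) squared distances
    nearest = dist.argmin(axis=1)                        # best palette index per pixel
    return palette[nearest].reshape(image.shape).astype(np.uint8)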



from Recent Questions - Stack Overflow https://ift.tt/3FUsfYZ
https://ift.tt/eA8V8J

ForEach in SwiftUI: Error "Missing argument for parameter #1 in call"

I'm still trying to create a calendar app and ran into another problem I wasn't able to find a solution for online.

Xcode throws the error "Missing argument for parameter #1 in call" in line 2 of my code sample. I used similar code at other places before, but I can't find the difference in this one.

Also, this code worked before and started throwing this error after I moved some code to the new view DayViewEvent, after getting the error "The compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions", which I hoped the move would fix, and which would be cleaner code anyway.

For reference: events is an optional array of EKEvents (hence the first if) and selectedDate is (obviously) a Date.

Any help is greatly appreciated!

if let events = events {
    ForEach(events, id: \.self) { event in
        if !event.isAllDay {
            DayViewEvent(selectedDate: selectedDate, event: event)
        }
    }
}
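
Not a confirmed fix, but one hedged sketch that keeps the ForEach closure down to a single view expression, which tends to be easier on the type-checker (it relies on EKEvent's Hashable conformance, as id: \.self in the original already does):

if let events = events {
    // filter the all-day events out first instead of branching inside the loop
    ForEach(events.filter { !$0.isAllDay }, id: \.self) { event in
        DayViewEvent(selectedDate: selectedDate, event: event)
    }
}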


from Recent Questions - Stack Overflow https://ift.tt/3nRhxwg
https://ift.tt/eA8V8J

How to delete data from file and move all info back

I created a file with fopen and deleted the first value from the file. Now I want to take all of the values remaining in the file and move them back to the start of the file.

Example:

  • File: [Info,data,string]

  • Wanted: [data,string,]

  • What is happening: [,data,string]

Any help would be great.
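
A minimal sketch of one way to do it, assuming the whole file is a single comma-separated line as in the example and that the file name is data.txt (a placeholder): read the line, skip the leftover separator, and rewrite the file from the start.

#include <stdio.h>

int main(void) {
    char buf[1024];
    FILE *f = fopen("data.txt", "r");
    if (!f) return 1;
    if (!fgets(buf, sizeof buf, f)) { fclose(f); return 1; }
    fclose(f);

    const char *start = buf;
    if (*start == ',')      /* the hole left by the deleted first value */
        start++;

    f = fopen("data.txt", "w");   /* "w" truncates, so the text moves back */
    if (!f) return 1;
    fputs(start, f);
    fclose(f);
    return 0;
}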



from Recent Questions - Stack Overflow https://ift.tt/32tTLPp
https://ift.tt/eA8V8J

With nestjs / node / npm project how to override a transitive dependency

I have a nestjs / node / npm project and am trying to override a transitive dependency due to a security vulnerability.

The project that seems to include it is:

"@nestjs/common": "7.6.18",

That package includes axios 0.21.1; I want to upgrade to axios 0.21.2.

In my package.json I tried using the overrides feature with the following.

},
"overrides": {
    "axios": "0.21.2"
},
"jest": {

But then I get this entry when I run npm list.

npm list --depth=4 

│ ├─┬ axios@0.21.1 invalid: "0.21.2" from node_modules/@nestjs/common

And only seems to include axios 0.21.2.

How do I upgrade a transitive dependency?

I am mostly using the nest wrappers:

nest build, etc

npm --version - 8.3.1

node --version - v17.4.0
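
For what it's worth, npm's overrides can also be scoped to the parent that pulls the dependency in; a hedged sketch of that shape for package.json (resolving the invalid marker may additionally require deleting node_modules and package-lock.json and reinstalling):

"overrides": {
    "@nestjs/common": {
        "axios": "0.21.2"
    }
}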



from Recent Questions - Stack Overflow https://ift.tt/35jOBqe
https://ift.tt/eA8V8J

How to call a method in another module JavaScript?

I have a class with methods performing login

LoginPage.js

class loginPage {
  fillCredentials(username, password) {
    cy.get('[id=username]').type(username);
    cy.get('[id=password]').type(password);
    return this;
  }

  clickLogin() {
    cy.contains("Login").click();
  }
}
export default loginPage;

I have another spec file for testing:

login.spec.js

import {fillCredentials,clickLogin} from '../../support/PageObjects/loginPage'

describe('User Onboarding Emails', () => {
  it('Verification email', () => {
    cy.visit('/')
    fillCredentials('username','password')
    clickLogin()
  });
});

However, it is giving an error of

(0 , _loginPage.fillCredentials) is not a function

I know this is the wrong way of calling a method. Is there any way I can use the methods without creating an instance of the class to access them?
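
One hedged sketch of the common page-object variation: export a ready-made instance (a default export) so the spec can call the methods without new. Named imports like { fillCredentials } only work for values the module actually exports, which is why the original call fails. This assumes Cypress's cy global, as in the original files.

// loginPage.js
class LoginPage {
  fillCredentials(username, password) {
    cy.get('[id=username]').type(username);
    cy.get('[id=password]').type(password);
    return this;
  }

  clickLogin() {
    cy.contains("Login").click();
  }
}

export default new LoginPage();   // an instance, not the class

// login.spec.js
import loginPage from '../../support/PageObjects/loginPage';

loginPage.fillCredentials('username', 'password');
loginPage.clickLogin();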



from Recent Questions - Stack Overflow https://ift.tt/3nJaINe
https://ift.tt/eA8V8J

2022-01-24

Registering and Resolving a Service with delegate type constructor parameter using Structuremap

I have the following service class:

public class MyService : IService
{
   public MyService(Func<string,bool> question)
   {
      ....
   }
   ...
}

When I use this service in my WinForms application, I want to pass the following code as the MyService constructor parameter:

(string question) => 
{ 
   var questionForm = new SimpleQuestionForm(question);
   if(questionForm.ShowDialog() == DialogResult.OK)
      return true;
   else
      return false; 
}

How can I tell StructureMap what my question delegate is?



from Recent Questions - Stack Overflow https://ift.tt/3rFkCAB
https://ift.tt/eA8V8J

How can I reindex elasticsearch?

I am using jhipster 7.5.0 and tried to install generator-jhipster-elasticsearch-reindexer, but it seems it does not work with this jhipster version. I am trying to reindex all elasticsearch indices manually, but I do not know how to do that.



from Recent Questions - Stack Overflow https://ift.tt/3qTSEBZ
https://ift.tt/eA8V8J

Rotating and scaling an image around a pivot, while scaling width and height separately in Pygame

I have a set of keyframes in a list that look like this:

   [{
        "duration" : 20,
        "position" : [0,0],
        "scale" : [1, 1],
        "angle" : 0,
        "rgba" : [255,255,255,255]
    },
    {
        "duration" : 5,
        "position" : [0,0],
        "scale" : [1, 1.5],
        "angle" : 50,
        "rgba" : [255,255,255,255]
    }]

The idea is being able to apply the corresponding transformations every frame. Notice that the scale is separated into width and height.
The problem comes from trying to scale width and height independently, while still rotating around a pivot.

I tried modifying some code from: (How to rotate an image around its center while its scale is getting larger(in Pygame))

def blitRotate(surf, image, pos, originPos, angle, zoom):

    # calculate the axis-aligned bounding box of the rotated image
    w, h       = image.get_size()
    box        = [pygame.math.Vector2(p) for p in [(0, 0), (w, 0), (w, -h), (0, -h)]]
    box_rotate = [p.rotate(angle) for p in box]
    min_box    = (min(box_rotate, key=lambda p: p[0])[0], min(box_rotate, key=lambda p: p[1])[1])
    max_box    = (max(box_rotate, key=lambda p: p[0])[0], max(box_rotate, key=lambda p: p[1])[1])

    # calculate the translation of the pivot 
    pivot        = pygame.math.Vector2(originPos[0], -originPos[1])
    pivot_rotate = pivot.rotate(angle)
    pivot_move   = pivot_rotate - pivot

    # calculate the upper left origin of the rotated image
    move   = (-originPos[0] + min_box[0] - pivot_move[0], -originPos[1] - max_box[1] + pivot_move[1])
    origin = (pos[0] + zoom * move[0], pos[1] + zoom * move[1])

    # get a rotated image
    rotozoom_image = pygame.transform.rotozoom(image, angle, zoom)

    # rotate and blit the image
    surf.blit(rotozoom_image, origin)

    # draw rectangle around the image
    pygame.draw.rect(surf, (255, 0, 0), (*origin, *rotozoom_image.get_size()), 2)

but I'm struggling with the math necessary to make it work. I've tried separating zoom into a pair of values and then, instead of using rotozoom, scaling first with transform.scale and rotating with transform.rotate afterwards, but that didn't work either.

To better illustrate what I mean, it would be something like this:
rotating around pivot while changing width and height

It changes its width and height, but the pivot stays the same.
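
For reference, a hedged sketch of one way to decouple the scales, adapting the rotate-around-pivot idea from the linked answer: scale the image and the pivot by (sx, sy) first, then rotate the pre-scaled image and recompute the offset. The function name and signature are made up.

def blit_rotate_scale_xy(surf, image, pos, pivot, angle, scale_xy):
    sx, sy = scale_xy
    w, h = image.get_size()
    # scale width and height independently, and scale the pivot with them
    scaled = pygame.transform.scale(image, (round(w * sx), round(h * sy)))
    scaled_pivot = (pivot[0] * sx, pivot[1] * sy)

    # place the scaled image so the pivot sits on pos, then rotate around it
    image_rect = scaled.get_rect(topleft=(pos[0] - scaled_pivot[0], pos[1] - scaled_pivot[1]))
    offset_center_to_pivot = pygame.math.Vector2(pos) - image_rect.center
    rotated_offset = offset_center_to_pivot.rotate(-angle)
    rotated_center = (pos[0] - rotated_offset.x, pos[1] - rotated_offset.y)

    rotated = pygame.transform.rotate(scaled, angle)
    surf.blit(rotated, rotated.get_rect(center=rotated_center))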



from Recent Questions - Stack Overflow https://ift.tt/3GTXYef
https://ift.tt/3AnVKBn

Playwright : How to run the same test on multiple url in the same browser on different tabs and in parallel

I'm looking to run the same test for several URLs (~20) and I want it to be as quick as possible.

I would like to run my 20 tests in parallel in one browser, in a new tab (page) for each, but I can't achieve it.

Here is my code, which opens a new browser for each test:

const urlList: string[] = [
  'url1',
  'url2',
  ...
];

test.describe.parallel("Same test for multiple url", async () => {

  let context;

  test.beforeAll(async ({ browser }) => {
    context = await browser.newContext();
  });

  for (const url of urlList) {

    test(`${url}`, async () => {
      let page = await context.newPage();
      await page.goto(url);
    });

  }

});
 


from Recent Questions - Stack Overflow https://ift.tt/32p2P82
https://ift.tt/eA8V8J

To understand the regular expression used in Webpack's SplitChunksPlugins CacheGroup [duplicate]

I'm trying to migrate from Webpack 3 to Webpack 4, which forces us to use the SplitChunksPlugin.

The SplitChunksPlugin uses the cacheGroups object as a way to group chunks together. In one of those cache groups, there is a test property which says:

vendors: {
  test: /[\\/]node_modules[\\/]/,
  priority: -10
}

My question is: what is [\\/] in the regular expression? I know forward slashes should be escaped because they delimit regular expression literals, but IMO it should be

vendors: {
  test: /\/node_modules\//,
  priority: -10
}

Can anyone please explain the difference?
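
For reference, a hedged illustration (not from the post) of what the character class changes: [\\/] matches either a backslash or a forward slash, so the pattern works on both Windows and POSIX paths, whereas \/ only matches a forward slash.

/[\\/]node_modules[\\/]/.test("C:\\project\\node_modules\\lodash");  // true
/\/node_modules\//.test("C:\\project\\node_modules\\lodash");        // false
/[\\/]node_modules[\\/]/.test("/home/project/node_modules/lodash");  // true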



from Recent Questions - Stack Overflow https://ift.tt/3tNaJmS
https://ift.tt/eA8V8J

2022-01-23

Getting the categoryId of a post in Graphql

This is my Graphql query for getting posts from headless wordpress:

export const GET_POSTS = gql`
 query GET_POSTS( $uri: String, $perPage: Int, $offset: Int, $categoryId: Int ) {
 
  posts: posts(where: { categoryId: $categoryId, offsetPagination: { size: $perPage, offset: $offset }}) {
    edges {
      node {
        id
        title
        excerpt
        slug
        featuredImage {
          node {
            ...ImageFragment
          }
        }
        categories {
          edges {
            node {
              categoryId
              name
            }
          }
        }
      }
    }
    pageInfo {
      offsetPagination {
        total
      }
    }
  }
 }
 
 ${ImageFragment}
 
 `;

When I do this: console.log("DATAAAA", data.posts.edges);

I get:

    DATAAAA [
  {
    node: {
      id: 'cG9zdDo0MA==',
      title: 'postttt',
      excerpt: '<p>dlkfjdsflkdslkdjfkldsf</p>\n',
      slug: 'postttt',
      featuredImage: null,
      categories: [Object],
      __typename: 'Post'
    },
    __typename: 'RootQueryToPostConnectionEdge'
  },
  {
    node: {
      id: 'cG9zdDox',
      title: 'Hello world!',
      excerpt: '<p>Welcome to WordPress. This is your first post. Edit or delete it, then start writing!</p>\n',
      slug: 'hello-world',
      featuredImage: null,
      categories: [Object],
      __typename: 'Post'
    },
    __typename: 'RootQueryToPostConnectionEdge'
  }
]

But when I try to go further, inside node, like this: console.log("DATAAAA", data.posts.edges.node); in order to get the categoryId which is inside categories: [Object], I get undefined.

How can I get the categoryId based on this query?
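
For reference, a hedged sketch of reading the nested connection shape this query returns: edges is an array, so it has to be mapped before node can be reached (the variable name is made up).

// every edge wraps one node; categories is itself a connection with edges
const categoryIds = data.posts.edges.flatMap((edge) =>
  edge.node.categories.edges.map((c) => c.node.categoryId)
);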

What I want to do is to get only the posts for a given category in getStaticProps like this, but I don't know how to get that categoryId dynamically. This is what my getStaticProps function looks like:

export async function getStaticProps(context) {
  console.log("sfsdfdsfdsf", context);
  const { data, errors } = await client.query({
    
    query: GET_POSTS,
    variables: {
      uri: context.params?.slug ?? "/",
      perPage: PER_PAGE_FIRST,
      offset: null,
      categoryId: <===== How to get this dynamically?
    },
  });

  const defaultProps = {
    props: {
      data: data || {},
    },
    
    revalidate: 1,
  };

  return handleRedirectsAndReturnData(defaultProps, data, errors, "posts");
}

This is my getStaticPaths function:

export async function getStaticPaths() {
  const { data } = await client.query({
    query: GET_CATEGORY_SLUGS_ID,
  });

  const pathsData = [];

  data?.categories?.edges.node &&
    data?.categories?.edges.node.map((category) => {
      if (!isEmpty(category?.slug)) {
        pathsData.push({ params: { slug: category?.slug } });
      }
    });

  return {
    paths: pathsData,
    fallback: FALLBACK,
  };
}

And this is what I get from console.log("THE CONTEXT", context):

THE CONTEXT {
  params: { slug: 'uncategorized' },
  locales: undefined,
  locale: undefined,
  defaultLocale: undefined
}

Any help would be appreciated.



from Recent Questions - Stack Overflow https://ift.tt/3IxUQoJ
https://ift.tt/eA8V8J

Bootstrap tooltip not triggering on badge

I have a Laravel app for which I'm trying to trigger a Bootstrap tooltip on a badge, but am getting the following error in the console:

Uncaught ReferenceError: Tooltip is not defined

I'm importing Popper and JavaScript components in resources/js/bootstrap.js as per the Bootstrap 5 documentation:

import Modal from "bootstrap/js/dist/modal.js";
import Collapse from "bootstrap/js/dist/collapse.js";
import Tooltip from "bootstrap/js/dist/tooltip.js";
import Popper from "@popperjs/core/dist/umd/popper";
  
try {
    window.Popper = Popper;
    /* window.Popper = require("@popperjs/core");  # This doesn't work either */

    window.$ = window.jQuery = require("jquery");
    require("bootstrap");
} catch (e) {}

I'm initialising the tooltips using the following code from the documentation in resources/js/main.js with a DOMContentLoaded event listener around it:

document.addEventListener("DOMContentLoaded", function () { var tooltipTriggerList = [].slice.call( document.querySelectorAll('[data-bs-toggle="tooltip"]') ); var tooltipList = tooltipTriggerList.map(function (tooltipTriggerEl) { return new Tooltip(tooltipTriggerEl); }); });

I've run npm run dev and have the following in webpack.mix.js:

mix.js('resources/js/app.js', 'public/js')
    .sass('resources/sass/app.scss', 'public/css')
    .postCss('resources/css/app.css', 'public/css')
    .sourceMaps();

And finally I reference both files and add the HTML in my markup:

<head>
    <script src="" defer></script>
    <script src="" defer></script>
</head>

<body>

...

<div class="text-center mt-3 mb-3 mb-lg-0">
<span class="badge" data-bs-toggle="tooltip" data-bs-placement="bottom" title="My tooltip"><i class="fas fa-lock"></i> SECURE PAYMENT</span>
</div>

</body>

So I've tried pretty much everything I can think of for now and am at a loss. What am I missing?
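
One hedged guess, given that the error is a plain ReferenceError: the Tooltip import only exists inside the module scope of resources/js/bootstrap.js, so code in main.js cannot see it unless it is exposed globally, the same way window.Popper is. A minimal sketch for resources/js/bootstrap.js:

import Tooltip from "bootstrap/js/dist/tooltip.js";

// expose the class alongside window.Popper so main.js can call new Tooltip(...)
window.Tooltip = Tooltip;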



from Recent Questions - Stack Overflow https://ift.tt/3fYZiAJ
https://ift.tt/eA8V8J

File reader finding the average of the values from a txt file of the rows and columns without using arrays

I am given a txt file with a 7x3 grid of values, and I'm supposed to find the average of each row (7 rows) and each column (3 columns) without using arrays. The professor has guided us to printing the grid out, but I'm not sure what to do next.

public static void main (String [] args){
    try{
        File file = new File("Cal.txt");
        Scanner scanFile = new Scanner(file);
        for (int i = 0; i < 7; i++){
            String string = scanFile.nextLine();
            System.out.println(string);
                    
        }

    }catch(Exception e) {
        System.out.println("Error occured...");
    }

}

The grid:

 40.0 30 10 
 25 76 1120
 0 1301 1823
 630 300 1000
 102 1100 1900
 982 200 239
 200 720 100
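
A hedged sketch of one way to continue without arrays: read the three values of each row with nextDouble(), print the row average immediately, and keep one running sum per column (the class name, Locale choice, and exception handling are assumptions):

import java.io.File;
import java.util.Locale;
import java.util.Scanner;

public class GridAverages {
    public static void main(String[] args) throws Exception {
        Scanner scanFile = new Scanner(new File("Cal.txt"));
        scanFile.useLocale(Locale.US);   // make sure "40.0" parses as a double
        double col1 = 0, col2 = 0, col3 = 0;
        for (int i = 0; i < 7; i++) {
            double a = scanFile.nextDouble();
            double b = scanFile.nextDouble();
            double c = scanFile.nextDouble();
            System.out.println("Row " + (i + 1) + " average: " + (a + b + c) / 3);
            col1 += a;
            col2 += b;
            col3 += c;
        }
        System.out.println("Column averages: " + col1 / 7 + " " + col2 / 7 + " " + col3 / 7);
        scanFile.close();
    }
}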


from Recent Questions - Stack Overflow https://ift.tt/3GSyfTe
https://ift.tt/eA8V8J

HttpMessageConverter for Single Object and List of Object

I have an object (here: Property) and I want to add CSV export capability to my Spring backend for both single objects and lists of objects.

I added this to my config:

@Override
public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
    configurer.(more config).mediaType("csv", new MediaType("text", "csv"));
}

@Override
public void extendMessageConverters(List<HttpMessageConverter<?>> converters) {
     converters.add(new PropertyConverter());
     converters.add(new StatsConverter());
}

and the Property Converter looks like this:

public class PropertyConverter extends AbstractGenericHttpMessageConverter<Property> {

    private static final Logger LOGGER = LoggerFactory.getLogger(PropertyConverter.class);

    public PropertyConverter() {
        super(new MediaType("text", "csv"));
    }

    @Override
    protected void writeInternal(Property property, Type type, HttpOutputMessage outputMessage) throws IOException, HttpMessageNotWritableException {
        try (var writer = new OutputStreamWriter(outputMessage.getBody())) {
            new StatefulBeanToCsvBuilder<>(writer).withSeparator(',').build().write(property);
        } catch (CsvDataTypeMismatchException | CsvRequiredFieldEmptyException ex) {
            LOGGER.error("CSV failed to convert property: ".concat(property.getExternalId()).concat(", exception: ".concat(ex.toString())));
        }
    }

    @Override
    protected Property readInternal(Class<? extends Property> clazz, HttpInputMessage inputMessage) throws IOException, HttpMessageNotReadableException {
        return null;
    }

    @Override
    public Property read(Type type, Class<?> contextClass, HttpInputMessage inputMessage) throws IOException, HttpMessageNotReadableException {
        return null;
    }
}

This code works for a single Property. When I try to return a list of Properties, I get:

org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.ClassCastException: class java.util.LinkedHashMap cannot be cast to class eu.webeng.model.Property (java.util.LinkedHashMap is in module java.base of loader 'bootstrap'; eu.webeng.model.Property is in unnamed module of loader 'app')
    ...

Caused by: java.lang.ClassCastException: class java.util.LinkedHashMap cannot be cast to class eu.webeng.model.Property (java.util.LinkedHashMap is in module java.base of loader 'bootstrap'; eu.webeng.model.Property is in unnamed module of loader 'app')
    at eu.webeng.converter.PropertyConverter.writeInternal(PropertyConverter.java:20) ~[classes/:na]
    at org.springframework.http.converter.AbstractGenericHttpMessageConverter.write(AbstractGenericHttpMessageConverter.java:104) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
    at org.springframework.web.servlet.mvc.method.annotation.AbstractMessageConverterMethodProcessor.writeWithMessageConverters(AbstractMessageConverterMethodProcessor.java:287) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
    at org.springframework.web.servlet.mvc.method.annotation.HttpEntityMethodProcessor.handleReturnValue(HttpEntityMethodProcessor.java:219) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
    at org.springframework.web.method.support.HandlerMethodReturnValueHandlerComposite.handleReturnValue(HandlerMethodReturnValueHandlerComposite.java:82) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
    ....

I tried to add a PropertyListConverter, but then it doesn't work for a single Property. When I tried to add both, only the first converter added is used.

How can I make the converter work for both a single Property and a List of Property (or any object)?
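
A hedged sketch of one direction, not a confirmed fix: type a single converter to Object, accept both shapes in supports, and normalize the payload to a List<Property> before handing it to OpenCSV. The readInternal/read stubs mirror the original.

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.lang.reflect.Type;
import java.util.List;

import org.springframework.http.HttpInputMessage;
import org.springframework.http.HttpOutputMessage;
import org.springframework.http.MediaType;
import org.springframework.http.converter.AbstractGenericHttpMessageConverter;
import org.springframework.http.converter.HttpMessageNotWritableException;

import com.opencsv.bean.StatefulBeanToCsvBuilder;

public class PropertyCsvConverter extends AbstractGenericHttpMessageConverter<Object> {

    public PropertyCsvConverter() {
        super(new MediaType("text", "csv"));
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return Property.class.isAssignableFrom(clazz) || List.class.isAssignableFrom(clazz);
    }

    @Override
    @SuppressWarnings("unchecked")
    protected void writeInternal(Object value, Type type, HttpOutputMessage outputMessage)
            throws IOException, HttpMessageNotWritableException {
        // normalize: a single Property becomes a one-element list
        List<Property> rows = value instanceof List
                ? (List<Property>) value
                : List.of((Property) value);
        try (var writer = new OutputStreamWriter(outputMessage.getBody())) {
            new StatefulBeanToCsvBuilder<Property>(writer).withSeparator(',').build().write(rows);
        } catch (Exception ex) {
            throw new HttpMessageNotWritableException("CSV conversion failed", ex);
        }
    }

    @Override
    protected Object readInternal(Class<?> clazz, HttpInputMessage inputMessage) {
        return null;
    }

    @Override
    public Object read(Type type, Class<?> contextClass, HttpInputMessage inputMessage) {
        return null;
    }
}

The List check here is deliberately loose (any list would match this converter), so a stricter canWrite(Type, ...) check on the element type may be needed in practice.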



from Recent Questions - Stack Overflow https://ift.tt/33I31zN
https://ift.tt/eA8V8J

SQLSTATE[23000]: Integrity constraint violation: 1048 Column 'user_id' cannot be null?

I got this error when try to seed database.

Laravel 7.

BlogPost Model

class BlogPost extends Model
{
    protected $fillable = [
        'title',
        'slug',
        'user_id',
        'category_id',
        'excerpt',
        'content_raw',
        'content_html',
        'is_published',
        'published_at',
        'updated_at',
        'created_at',
    ];

    public function category()
    {
        return $this->belongsTo(BlogCategory::class);
    }

    public function user()
    {
        return $this->belongsTo(User::class);
    }
}

User model

class User extends Authenticatable
{
    use Notifiable;

    /**
     * The attributes that are mass assignable.
     *
     * @var array
     */
    protected $fillable = [
        'name', 'email', 'password',
    ];

    /**
     * The attributes that should be hidden for arrays.
     *
     * @var array
     */
    protected $hidden = [
        'password', 'remember_token',
    ];

    /**
     * The attributes that should be cast to native types.
     *
     * @var array
     */
    protected $casts = [
        'email_verified_at' => 'datetime',
    ];
}

User migration

        Schema::create('users', function (Blueprint $table) {
            $table->id();
            $table->string('name');
            $table->string('email')->unique();
            $table->timestamp('email_verified_at')->nullable();
            $table->string('password');
            $table->rememberToken();
            $table->timestamps();
        });

BlogPost migration

        Schema::create('blog_posts', function (Blueprint $table) {
            $table->increments('id');
            $table->unsignedInteger('category_id');
            $table->foreignId('user_id')->constrained();
            $table->string('title');
            $table->string('slug')->unique();
            $table->text('excerpt')->nullable();
            $table->text('content_raw');
            $table->text('content_html');
            $table->boolean('is_published')->default(false)->index();
            $table->timestamp('published_at')->nullable();
            $table->foreign('category_id')->references('id')->on('blog_categories');
            $table->timestamps();
        });

User seeder

class UserTableSeeder extends Seeder
{
    public function run()
    {
        $users = [
            [
                'name' => 'Author',
                'email' => 'seriiburduja@mail.ru',
                'password' => bcrypt('some1234')
            ],
            [
                'name' => 'Admin',
                'email' => 'seriiburduja@gmail.com',
                'password' => bcrypt('some1234')
            ]
        ];

        DB::table('users')->insert($users);
    }
}

BlogPost Factory

$factory->define(BlogPost::class, function (Faker $faker) {
    $title = $faker->sentence(rand(3, 8), true);
    $text = $faker->realText(rand(1000, 4000));
    $isPublished = rand(1, 5) > 1;
    $createdAt = $faker->dateTimeBetween('-6 months', '-1 day');

    return [
        'category_id' => rand(1, 10),
        'user_id' => 1,
        'title' => $title,
        'slug' => Str::slug($title),
        'excerpt' => $faker->text(rand(100, 400)),
        'content_raw' => $text,
        'content_html' => $text,
        'is_published' => $isPublished,
        'published_at' => $isPublished ? $faker->dateTimeBetween('-6 months', '-1day') : null,
        'created_at' => $createdAt,
        'updated_at' => $createdAt
    ];
});

DatabaseSeeder

class DatabaseSeeder extends Seeder
{
    /**
     * Seed the application's database.
     *
     * @return void
     */
    public function run()
    {
        $this->call(UserTableSeeder::class);
        $this->call(BlogCategorySeeder::class);
        factory(BlogPost::class, 1)->create();
    }
}

When I run php artisan migrate:fresh --seed, I get this error.

The users and blog_categories tables seed successfully, but the error appears when seeding blog_posts.

I don't understand why.

The user_id field exists in $fillable in the BlogPost model.

If I change the migration for blog_posts and make user_id nullable, then the seeding works, but user_id is null, and I don't need that.

Thanks in advance.



from Recent Questions - Stack Overflow https://ift.tt/3rJC2w1
https://ift.tt/eA8V8J

2022-01-22

Select data and name when pointing it chart with ggplotly

I did everything in ggplot, and it was all working well. Now I need it to show data when I hover over a data point: in this example, the model (to identify the point) and disp and wt (the data on the axes). For this I mapped shape to the model data (the same shape for every point; I do not actually want different shapes) and asked ggplot not to show the shape in the legend. Then I convert to plotly. I succeeded in showing the data when I hover over the circles, but now I am having problems with the legend showing colors and shapes separated by a comma...

I did not want to make it again from scratch in plotly, as I have no experience with plotly and this is part of a much larger shiny project, where the chart automatically adjusts the axis scales and adds trend lines to the chart, among other things (not included here for simplicity) that I do not know how to do in plotly.

Many thanks in advance. I have tried a million ways for a couple of days now and did not succeed.

# choose mtcars data and add the rowname as a column, as I want to link it to shapes in ggplot
data1 <- mtcars
data1$model <- rownames(mtcars)
# I turn cyl data to character as when charting it showed (Error: Continuous value supplied to discrete scale)
data1$cyl <- as.character(data1$cyl)

# linking colors with cylinders and shapes with models
ccolor <- c("#E57373","purple","green")
cylin <- c(6,4,8)  
# I actually do not want shapes to be different, I only want to show the model when hovering over a data point
models <- data1$model
sshapes <- rep(16,length(models)) 



# I am going to chart; I do not want the legend to show shape
graff <- ggplot(data1,aes(x=disp, y=wt,shape=model,col=cyl)) + 
  geom_point(size = 1) +
  ylab ("eje y") + xlab('eje x') +
  scale_color_manual(values= ccolor, breaks= cylin)+
  scale_shape_manual(values = sshapes, breaks = models)+
  guides(shape='none') # do not want shapes to show in legend


graff

The chart is fine, but when converting to ggplotly, I am having trouble with the legend:

# chart is fine, but when converting to ggplotly, I am having trouble with the legend

graffPP <- ggplotly(graff)
graffPP

The legend is not the same as it was in ggplot.

I succeeded in showing the model and the axis data when I hover over a data point in the chart, but now I am having problems with the legend.
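
One hedged workaround that is often suggested for this: edit the trace names after conversion, since ggplotly encodes combined aesthetics as "(cyl,model)" in each trace's name. The regex and the trace structure are assumptions, so inspecting graffPP$x$data first is worth it.

graffPP <- ggplotly(graff)

seen <- character(0)
for (i in seq_along(graffPP$x$data)) {
  nm <- graffPP$x$data[[i]]$name
  if (is.null(nm)) next
  nm <- sub("^\\(([^,]*),.*$", "\\1", nm)             # "(6,Mazda RX4)" -> "6"
  graffPP$x$data[[i]]$name <- nm
  graffPP$x$data[[i]]$legendgroup <- nm
  graffPP$x$data[[i]]$showlegend <- !(nm %in% seen)   # one legend entry per cyl
  seen <- c(seen, nm)
}
graffPP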



from Recent Questions - Stack Overflow https://ift.tt/3fNSpC8
https://ift.tt/eA8V8J

How can I convert a hash representation in an escaped string to an actual hash?

I have a field stored in a MySQL db as a string, but it holds a representation of a hash. I'm wondering how I can convert this string into a hash. I have tried a bunch of different things with gsub and JSON.parse, to no avail.

Here is a reference to what I'm trying to convert:

=> "{:address=>\"\", :city=>\"\", :country=>\"\", :zip=>\"\", :state=>\"\", :industry=>\"\", :org=>\"\", :job_title=>\"\", :purchasing_time_frame=>\"\", :role_in_purchase_process=>\"\", :no_of_employees=>\"\", :comments=>\"\", :custom_questions=>[{\"title\"=>\"License Number\", \"value\"=>\"345g3245\"}, {\"title\"=>\"License Type\", \"value\"=>\"Legal\"}], :create_time=>\"2022-01-17T22:49:26Z\"}" 


from Recent Questions - Stack Overflow https://ift.tt/3GQ9YgM
https://ift.tt/eA8V8J

Filtering, subsetting, and matching data from two separate data frames based on the ID and the period between the dates in R

I have two data frames:

  1. The first contains information on hospitalization and has 3706 observations (columns: Admission_date, Discharge_date, Patient_ID):
    1   2019-08-22 15:06:00 2019-10-09 12:00:00       1565
    2   2019-08-22 16:15:00 2019-09-12 12:33:00          3
    3   2019-08-22 20:00:00 2019-10-08 12:00:00       1408
    4   2019-08-23 14:00:00 2019-11-22 13:40:00       1566
    5   2019-08-23 15:30:00 2019-10-14 16:20:00       1567
    6   2019-08-24 12:30:00 2019-09-19 12:11:00        268
    7   2019-08-26 14:15:00 2019-09-24 13:50:00       1568
    8   2019-08-26 15:50:00 2019-10-29 13:47:00        161
    9   2019-08-26 17:51:00 2019-09-19 14:00:00       1569
    10  2019-08-26 19:30:00 2020-01-20 16:10:00       1570
    11  2019-08-26 20:45:00 2019-09-17 11:00:00       1571
    12  2019-08-26 21:10:00 2020-01-10 14:30:00        702
    13  2019-08-27 14:25:00 2019-09-24 11:10:00       1572
    14  2019-08-27 16:46:00 2019-08-30 15:18:00       1573
    15  2019-08-27 19:45:00 2019-09-02 13:45:00       1574
    16  2019-08-27 23:10:00 2019-10-03 14:55:00       1304
    17  2019-08-28 10:00:00 2019-09-18 14:20:00       1575
    18  2019-08-28 15:41:00 2019-10-02 11:35:00       1576
    19  2019-08-28 21:00:00 2019-10-11 14:10:00       1577
    20  2019-08-29 12:23:00 2019-09-24 12:20:00       1578
    21  2019-08-29 19:30:00 2019-09-25 12:30:00        599
    22  2019-08-30 10:40:00 2019-11-08 13:00:00       1495
    23  2019-08-30 12:40:00 2019-09-23 12:20:00         33
    24  2019-08-30 15:00:00 2019-10-14 13:25:00       1435
    25  2019-08-30 16:00:00 2019-09-27 15:25:00       1579
    26  2019-08-30 17:20:00 2019-09-20 12:00:00       1555
    27  2019-08-31 17:30:00 2019-09-12 09:00:00       1399
    28  2019-09-02 03:25:00 2019-09-09 14:45:00       1580
    29  2019-09-02 12:59:00 2019-10-30 10:10:00       1418
    30  2019-09-02 18:20:00 2019-09-20 16:10:00        766
    31  2019-09-02 23:58:00 2019-11-22 13:58:00       1581
    32  2019-09-03 11:15:00 2019-10-22 10:00:00        519
    33  2019-09-03 17:00:00 2019-10-18 13:30:00       1582
    34  2019-09-04 12:20:00 2019-11-20 12:11:00       1583
    35  2019-09-04 13:30:00 2019-10-18 12:25:00       1584
    36  2019-09-04 14:00:00 2019-10-21 11:35:00       1585
    37  2019-09-05 12:12:00 2019-10-01 13:15:00       1586
    38  2019-09-04 14:00:00 2019-12-17 13:20:00       1561
    39  2019-09-04 21:51:00 2019-11-18 14:06:00       1414
    40  2019-09-04 23:50:00 2019-10-01 13:00:00       1587
    41  2019-09-05 22:00:00 2019-09-27 11:14:00       1588
    42  2019-09-06 19:05:00 2019-10-21 13:40:00       1589
    43  2019-09-07 04:20:00 2019-10-08 14:00:00       1590
    44  2019-09-09 01:18:00 2019-09-19 12:20:00       1591
    45  2019-09-10 12:00:00 2019-10-16 10:15:00        409
    46  2019-09-10 14:15:00 2019-10-16 14:30:00        279
    47  2019-09-10 16:00:00 2019-09-11 11:40:00       1592
    48  2019-09-11 15:00:00 2019-10-03 14:50:00       1593
    49  2019-09-12 01:25:00 2019-12-16 14:30:00       1594
    50  2019-09-12 14:30:00 2019-10-07 12:30:00       1595
    51  2019-09-12 20:15:00 2019-09-22 18:40:00       1046
    52  2019-09-13 02:08:00 2019-10-18 13:30:00       1596
    53  2019-09-13 12:00:00 2019-10-23 11:30:00       1597
    54  2019-09-13 16:55:00 2019-09-27 08:09:00         94
    55  2019-09-13 20:00:00 2019-10-18 14:00:00       1211
    56  2019-09-13 23:55:00 2019-11-05 12:31:00       1598
    57  2019-09-14 03:30:00 2019-10-24 13:30:00       1599
    58  2019-09-14 10:14:00 2019-11-06 12:20:00       1600
    59  2019-09-14 11:35:00 2019-10-15 13:20:00         44
    60  2019-09-14 16:30:00 2019-09-30 12:24:00        473
    61  2019-09-14 22:00:00 2019-10-24 13:30:00       1601
    62  2019-09-15 11:50:00 2019-11-26 09:48:00        274
    63  2019-09-16 09:40:00 2019-09-30 13:40:00       1202
    64  2019-09-16 14:30:00 2019-11-12 13:56:00       1602
    65  2019-09-16 18:39:00 2019-10-21 14:55:00       1603
    66  2019-09-17 11:05:00 2019-10-09 14:19:00       1604
    67  2019-09-17 23:55:00 2019-12-03 11:50:00        443
    68  2019-09-18 15:30:00 2019-10-16 14:15:00       1605
    69  2019-09-18 16:50:00 2019-12-06 13:34:00       1606
    70  2019-09-19 10:40:00 2019-12-13 12:07:00       1607
    71  2019-09-19 11:55:00 2019-12-23 12:30:00       1608
    72  2019-09-19 15:30:00 2019-09-30 10:25:00       1609
    73  2019-09-19 17:08:00 2019-10-09 14:00:00       1413
    74  2019-09-19 21:58:00 2019-10-16 13:22:00       1610
    75  2019-09-20 09:30:00 2019-11-29 13:45:00       1541
    76  2019-09-21 17:30:00 2019-10-18 12:30:00       1611
    77  2019-09-21 19:00:00 2019-09-26 12:10:00       1612
    78  2019-09-22 08:30:00 2019-12-05 13:30:00       1613
    79  2019-09-22 13:00:00 2019-12-05 18:39:00       1614
    80  2019-09-23 16:10:00 2019-10-14 14:50:00       1615
    81  2019-09-23 19:10:00 2019-11-05 15:11:00       1616
    82  2019-09-24 14:30:00 2019-10-22 13:30:00        522
    83  2019-09-24 16:41:00 2019-11-08 12:00:00       1238
    84  2019-09-24 17:45:00 2019-10-29 14:01:00       1617
    85  2019-09-25 12:50:00 2019-10-25 12:30:00       1618
    86  2019-09-25 14:50:00 2019-12-23 17:00:00       1619
    87  2019-09-25 16:15:00 2019-11-21 14:44:00        510
    88  2019-09-25 21:30:00 2019-11-08 12:39:00        969
    89  2019-09-26 10:32:00 2019-10-21 12:20:00       1620
    90  2019-09-27 10:44:00 2019-12-27 13:37:00       1621
    91  2019-09-27 18:00:00 2019-10-17 15:10:00       1622
    92  2019-09-28 05:49:00 2019-10-07 13:30:00       1623
    93  2019-09-29 16:45:00 2019-10-23 13:30:00         94
    94  2019-09-29 19:00:00 2019-10-03 13:00:00       1535
    95  2019-09-29 21:50:00 2019-10-09 14:00:00       1624
    96  2019-09-30 11:50:00 2019-10-07 14:15:00       1625
    97  2019-09-30 13:20:00 2019-10-18 13:30:00       1626
    98  2019-09-30 13:50:00 2019-10-30 12:40:00       1627
    99  2019-10-01 12:45:00 2019-10-29 14:20:00       1555
    100 2019-10-01 13:15:00 2019-10-22 14:00:00       1628
    101 2019-10-01 19:10:00 2019-10-17 13:40:00        935
  2. The second one contains data from the results of tests carried out during hospitalization and has 7931 observations.
              Test_date Value Patient_ID
1   2019-10-21 11:39:00  2.23       1614
2   2019-10-21 11:39:00  5.25         51
3   2019-10-21 11:05:00  4.63       1644
4   2019-10-21 11:05:00  4.65       1617
5   2019-10-21 11:05:00  3.37       1656
6   2019-10-21 10:37:00  2.06       1594
7   2019-10-21 10:37:00  7.24       1649
8   2019-10-21 10:37:00  2.44       1619
9   2019-10-21 10:37:00  4.27       1621
10  2019-10-21 10:37:00  6.15       1581
11  2019-10-21 10:37:00  3.28        443
12  2019-10-21 10:37:00  2.22       1406
13  2019-10-21 10:37:00  3.90       1551
14  2019-10-18 11:00:00  4.83       1585
15  2019-10-18 11:00:00  2.43       1626
16  2019-10-18 11:00:00  2.13       1620
17  2019-10-18 11:00:00  4.48       1628
18  2019-10-18 11:00:00  4.63       1637
19  2019-10-18 11:00:00  1.87        510
20  2019-10-17 11:12:00  1.70       1389
21  2019-10-17 11:12:00  3.24       1596
22  2019-10-17 11:12:00  5.00       1647
23  2019-10-17 11:11:00  2.69       1418
24  2019-10-17 11:11:00  3.32       1584
25  2019-10-17 11:11:00  2.80       1211
26  2019-10-16 10:15:00  5.83       1646
27  2019-10-16 10:15:00  2.22       1472
28  2019-10-16 10:15:00  3.29       1495
29  2019-10-16 10:15:00  4.00       1605
30  2019-10-16 10:15:00  4.99         12
31  2019-10-16 10:15:00  3.29       1645
32  2019-10-16 10:15:00  2.54       1582
33  2019-10-16 10:15:00  4.31       1618
34  2019-10-15 11:11:00  3.26       1610
35  2019-10-15 11:11:00  3.64       1598
36  2019-10-15 10:32:00  2.45        409
37  2019-10-15 10:32:00  2.45       1643
38  2019-10-15 10:32:00  2.06       1640
39  2019-10-15 10:32:00  4.96       1644
40  2019-10-15 10:31:00  4.87        279
41  2019-10-14 10:54:00  2.30       1614
42  2019-10-14 10:54:00  7.86       1638
43  2019-10-14 10:46:00  2.35       1641
44  2019-10-14 10:46:00  5.16       1644
45  2019-10-14 10:46:00  4.08       1631
46  2019-10-14 10:46:00  1.97       1615
47  2019-10-14 10:45:00  3.85       1621
48  2019-10-14 10:45:00  2.75         44
49  2019-10-14 10:45:00  1.92       1642
50  2019-10-14 10:45:00  1.18        510
51  2019-10-14 10:30:00  2.31       1619
52  2019-10-11 11:29:00  2.07       1642
53  2019-10-11 11:29:00  3.15       1639
54  2019-10-11 11:29:00  3.75       1611
55  2019-10-11 11:29:00  1.03       1374
56  2019-10-11 11:29:00  4.36       1551
57  2019-10-11 11:29:00  4.77       1588
58  2019-10-11 11:28:00  1.64        151
59  2019-10-11 11:28:00  5.57       1638
60  2019-10-11 11:28:00  4.18       1435
61  2019-10-11 11:28:00  2.98       1538
62  2019-10-11 11:28:00  3.60       1636
63  2019-10-11 11:28:00  1.48         94
64  2019-10-10 10:39:00  3.44       1636
65  2019-10-10 10:39:00  2.50       1570
66  2019-10-10 10:24:00  3.73       1567
67  2019-10-09 11:19:00  3.26        985
68  2019-10-09 11:19:00  3.55        161
69  2019-10-09 11:18:00  4.18       1604
70  2019-10-09 11:18:00  4.30         51
71  2019-10-09 11:18:00  3.87        279
72  2019-10-09 11:18:00  3.22       1577
73  2019-10-09 11:18:00  3.11       1565
74  2019-10-09 11:18:00  2.58       1614
75  2019-10-09 11:18:00  1.96       1613
76  2019-10-08 11:33:00  6.11       1631
77  2019-10-08 11:32:00  4.25       1634
78  2019-10-08 11:32:00  2.20       1635
79  2019-10-08 11:04:00  3.53       1632
80  2019-10-08 11:04:00  2.06       1633
81  2019-10-08 11:04:00  2.61       1614
82  2019-10-08 11:04:00  6.95       1552
83  2019-10-07 11:04:00  2.52       1608
84  2019-10-07 11:04:00  2.54       1619
85  2019-10-07 11:04:00  3.17       1589
86  2019-10-07 11:04:00  2.80       1582
87  2019-10-07 11:04:00  3.83       1607
88  2019-10-07 11:03:00  4.49         12
89  2019-10-07 11:03:00  4.64       1629
90  2019-10-07 11:03:00  6.61       1597
91  2019-10-07 11:03:00  3.87       1630
92  2019-10-07 11:03:00  4.21       1618
93  2019-10-07 11:03:00  4.58       1408
94  2019-10-07 11:03:00  4.89       1595
95  2019-10-07 11:03:00  3.52        954
96  2019-10-04 11:02:00  3.92        935
97  2019-10-04 11:02:00  2.41       1556
98  2019-10-04 11:02:00  3.44       1598
99  2019-10-04 11:02:00  1.49       1561
100 2019-10-04 11:01:00  8.38       1597
101 2019-10-04 11:01:00  4.06       1544
102 2019-10-04 11:01:00  3.52        216
103 2019-10-04 11:01:00  5.96       1623
104 2019-10-04 11:01:00  5.23       1606
105 2019-10-04 10:58:00  4.08       1628
106 2019-10-03 10:51:00  1.84       1603
107 2019-10-03 10:50:00  4.02       1621
108 2019-10-03 10:50:00  3.75       1304
109 2019-10-03 10:39:00  2.67       1495
110 2019-10-03 10:39:00  4.59        519
111 2019-10-03 10:39:00  3.96       1527
112 2019-10-02 11:02:00  2.20        528
113 2019-10-02 11:02:00  2.64       1538
114 2019-10-02 11:02:00  3.60       1625
115 2019-10-02 11:02:00  4.69       1627
116 2019-10-02 11:02:00  2.33       1619
117 2019-10-02 11:02:00  3.79         10
118 2019-10-02 11:02:00  3.46       1555
119 2019-10-02 11:02:00  2.19       1626
120 2019-10-01 10:37:00  1.66       1624
121 2019-10-01 10:37:00  3.93       1341
122 2019-10-01 10:37:00  3.49       1622
123 2019-10-01 10:37:00  2.41       1614
124 2019-10-01 10:37:00  6.56       1535
125 2019-10-01 10:37:00  2.50       1576
126 2019-10-01 10:36:00  4.00       1553
127 2019-09-30 10:56:00  8.94       1091
128 2019-09-30 10:56:00  3.94       1599
129 2019-09-30 10:56:00  3.26       1618
130 2019-09-30 10:56:00  6.08       1552
131 2019-09-30 10:56:00  3.17       1587
132 2019-09-30 10:56:00  7.17       1380
133 2019-09-30 10:56:00  4.35       1551
134 2019-09-30 10:55:00  3.20       1546
135 2019-09-30 10:55:00  4.06         44
136 2019-09-30 10:18:00  2.37       1619
137 2019-09-27 16:05:00  2.50       1619
138 2019-09-27 10:43:00  2.32       1620
139 2019-09-27 10:43:00  2.08       1619
140 2019-09-27 10:43:00  5.89        969
141 2019-09-27 10:43:00  3.03         10
142 2019-09-27 10:43:00  3.12       1579
143 2019-09-27 10:43:00  2.21       1616
144 2019-09-27 10:43:00  1.35        510
145 2019-09-27 10:43:00  2.95       1531
146 2019-09-26 10:32:00  5.95       1552
147 2019-09-26 10:31:00  4.32       1544
148 2019-09-26 10:28:00  4.07        279
149 2019-09-26 10:28:00  3.23       1238
150 2019-09-26 10:28:00  1.80        702
151 2019-09-26 10:28:00  2.72       1615
152 2019-09-26 10:27:00  2.86       1618
153 2019-09-26 10:27:00  4.57       1617
154 2019-09-25 10:47:00  1.31         94
155 2019-09-25 10:47:00  3.12       1582
156 2019-09-25 10:47:00  2.23       1615
157 2019-09-25 10:47:00  5.15        599
158 2019-09-24 11:38:00  3.83       1605
159 2019-09-24 11:37:00  3.92       1586
160 2019-09-24 11:37:00  1.76       1614
161 2019-09-24 11:22:00  3.18       1578
162 2019-09-24 11:08:00  4.54       1562
163 2019-09-24 11:08:00  1.50       1613
164 2019-09-24 11:08:00  3.58       1593
165 2019-09-24 11:07:00  3.71       1611
166 2019-09-24 10:33:00  2.56       1570
167 2019-09-23 11:54:00  3.08       1608
168 2019-09-23 11:50:00  3.34       1607
169 2019-09-23 11:50:00  7.53       1552
170 2019-09-23 11:29:00  5.88       1553
171 2019-09-23 11:28:00  2.07       1568
172 2019-09-23 11:28:00  3.37        216
173 2019-09-23 11:28:00  3.54       1546
174 2019-09-23 11:27:00  4.93       1572
175 2019-09-23 11:27:00  5.18       1609
176 2019-09-23 11:26:00  6.45       1597
177 2019-09-23 11:26:00  3.05       1588
178 2019-09-23 11:26:00  1.61       1610
179 2019-09-23 11:26:00  4.65       1536
180 2019-09-23 11:26:00  3.11        702
181 2019-09-23 11:25:00  2.83       1413
182 2019-09-23 11:25:00  6.08       1612
183 2019-09-20 10:58:00  3.41       1551
184 2019-09-20 10:58:00  5.82       1211
185 2019-09-20 10:57:00  6.06       1304
186 2019-09-20 10:30:00  3.26       1606
187 2019-09-20 10:30:00  2.26       1561
188 2019-09-19 10:36:00  3.95       1544
189 2019-09-19 10:36:00  8.17       1562
190 2019-09-19 10:36:00  1.95       1591
191 2019-09-19 10:36:00  1.68       1603
192 2019-09-19 10:26:00  3.09       1604
193 2019-09-19 10:26:00  2.00         15
194 2019-09-19 10:26:00  3.10        410
195 2019-09-19 10:26:00  3.86       1091
196 2019-09-19 10:26:00  6.24       1552
197 2019-09-19 10:26:00  3.92       1546
198 2019-09-19 10:26:00  3.20       1569
199 2019-09-18 10:52:00  4.88       1554
200 2019-09-18 10:25:00  3.22       1418
201 2019-09-18 10:25:00  2.01        473
202 2019-09-18 10:25:00  3.23       1602
203 2019-09-18 10:25:00  3.90       1202
204 2019-09-17 11:14:00  5.70       1597
205 2019-09-17 11:10:00  4.07       1211
206 2019-09-17 11:10:00  2.38       1575
207 2019-09-17 11:09:00  7.88       1552
208 2019-09-17 11:09:00  3.54        274
209 2019-09-17 11:09:00  3.44       1046
210 2019-09-17 11:08:00  3.68       1567
211 2019-09-17 10:56:00  3.15       1566
212 2019-09-17 10:56:00  4.68       1600
213 2019-09-17 10:56:00  3.51       1601
214 2019-09-17 10:55:00  1.55         94
215 2019-09-17 10:55:00  1.92         44
216 2019-09-16 10:21:00  2.61       1519
217 2019-09-16 10:21:00  4.07       1596
218 2019-09-16 10:21:00  5.16        268
219 2019-09-16 10:21:00  3.52       1598
220 2019-09-16 10:16:00  1.50        702
221 2019-09-16 10:16:00  8.65       1552
222 2019-09-16 10:16:00  6.01       1571
223 2019-09-16 10:16:00  3.97       1527
224 2019-09-16 10:16:00  5.37       1551
225 2019-09-16 10:16:00  3.36       1599
226 2019-09-16 10:16:00  1.90        409
227 2019-09-16 10:16:00  5.00       1595
228 2019-09-13 22:27:00  5.52        510
229 2019-09-13 11:15:00  2.85       1575
230 2019-09-13 10:43:00  3.48        268
231 2019-09-13 10:43:00  2.68       1558
232 2019-09-13 10:43:00  4.46        519
233 2019-09-13 10:43:00  4.67       1478
234 2019-09-13 10:43:00  8.55         10
235 2019-09-13 10:42:00  1.87       1594
236 2019-09-13 10:42:00  5.01       1593
237 2019-09-12 10:37:00  3.68       1533
238 2019-09-12 10:17:00  3.51        279
239 2019-09-12 10:17:00  3.63       1414
240 2019-09-12 10:17:00  2.46       1540
241 2019-09-11 10:45:00  7.23        664
242 2019-09-11 10:34:00  1.76       1543
243 2019-09-11 10:34:00  5.98       1553
244 2019-09-11 10:33:00  2.44       1551
245 2019-09-11 10:33:00  3.27        232
246 2019-09-10 11:41:00  7.56       1552
247 2019-09-10 11:15:00  4.44         30
248 2019-09-10 11:14:00  2.67       1538
249 2019-09-10 11:14:00  2.68       1589
250 2019-09-10 11:14:00  3.46       1408
251 2019-09-10 11:14:00  3.02       1590
252 2019-09-10 11:14:00  3.85       1567
253 2019-09-10 11:14:00  3.56       1501
254 2019-09-10 11:01:00  4.70       1549
255 2019-09-10 11:01:00  1.69       1591
256 2019-09-10 11:01:00  2.79       1361
257 2019-09-10 11:01:00  4.26       1575
258 2019-09-09 11:28:00  4.98       1586
259 2019-09-09 11:28:00  3.97       1547
260 2019-09-09 11:28:00  3.66       1588
261 2019-09-09 10:28:00  2.60       1564
262 2019-09-09 10:20:00  2.77        510
263 2019-09-06 10:15:00  4.14       1585
264 2019-09-06 10:15:00  3.02       1581
265 2019-09-06 10:15:00  4.50       1544
266 2019-09-06 10:13:00  2.04          3
267 2019-09-06 10:05:00  9.40       1537
268 2019-09-06 10:05:00  3.02       1584
269 2019-09-06 10:05:00  3.44       1583
270 2019-09-06 10:05:00  3.18       1582
271 2019-09-06 10:02:00  4.46         76
272 2019-09-06 10:00:00  4.68       1534
273 2019-09-04 10:26:00  4.81       1580
274 2019-09-04 10:26:00  2.86       1418
275 2019-09-04 10:26:00  3.61       1575
276 2019-09-04 10:26:00  4.22       1502
277 2019-09-04 10:26:00  4.15        766
278 2019-09-04 10:25:00  6.05        599
279 2019-09-04 10:25:00  3.49       1304
280 2019-09-03 10:49:00  1.85       1543
281 2019-09-03 10:29:00  9.04       1580
282 2019-09-03 10:29:00  3.65       1495
283 2019-09-03 10:29:00  1.89       1497
284 2019-09-03 10:29:00  1.43       1539
285 2019-09-03 10:29:00  3.54         33
286 2019-09-03 10:29:00  2.69       1540
287 2019-09-03 10:29:00  5.38       1399
288 2019-09-03 10:29:00  3.78       1435
289 2019-09-03 10:19:00  3.69       1581
290 2019-09-03 10:19:00  4.25       1541
291 2019-09-03 10:19:00  3.13       1579
292 2019-09-02 11:50:00  5.58        216
293 2019-09-02 10:26:00  5.20       1435
294 2019-09-02 10:26:00  7.68       1361
295 2019-09-02 10:26:00  3.25       1551
296 2019-09-02 10:26:00  2.39       1464
297 2019-09-02 10:26:00  3.18       1575
298 2019-09-02 10:26:00  4.39       1567
299 2019-09-02 10:26:00  7.27       1555
300 2019-09-02 10:26:00  8.91       1380
301 2019-08-30 11:15:00  3.34       1538

I would like to create two new columns in the first data frame. The first should contain the value of the first test performed within the time frame of the given hospitalization (test date between the admission date and the discharge date) for a matching patient ID. The second should contain the date of the test whose value is in the first column.

The number of tests performed during one hospitalization varies and ranges from one to a dozen or so. It also happens that one patient has many hospitalizations listed in the first data frame and many tests within each of them.

So far, I have experimented with converting both frames into lists in which the first-order elements correspond to the patient IDs, while the second-order elements hold the remaining data from the data frame. However, I have no idea how to match and properly filter the test values from the lists resulting from the second frame against the data in the list resulting from the first frame.

I would appreciate any tip on how I could solve this problem.
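
A hedged dplyr sketch of one way to do it without lists (df1/df2 and column names as in the str() output below): give every hospitalization its own id, join the tests on Patient_ID, keep only tests inside the stay, pick the earliest per stay, and join the result back.

library(dplyr)

first_tests <- df1 %>%
  mutate(stay_id = row_number()) %>%
  inner_join(df2, by = "Patient_ID") %>%
  filter(Test_date >= Admission_date, Test_date <= Discharge_date) %>%
  group_by(stay_id) %>%
  slice_min(Test_date, n = 1, with_ties = FALSE) %>%   # first test of the stay
  ungroup() %>%
  select(stay_id, First_test_value = Value, First_test_date = Test_date)

result <- df1 %>%
  mutate(stay_id = row_number()) %>%
  left_join(first_tests, by = "stay_id") %>%   # stays without a test get NA
  select(-stay_id)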

EDIT: Sample data str:

df1:

'data.frame':   101 obs. of  3 variables:
 $ Admission_date: POSIXct, format: "2019-04-17 17:00:00" "2019-04-16 23:55:00" "2019-04-16 18:25:00" "2019-04-16 13:00:00" ...
 $ Discharge_date: POSIXct, format: "2019-06-03 11:10:00" "2019-05-15 15:31:00" "2019-05-07 13:00:00" "2019-04-26 13:00:00" ...
 $ Patient_ID    : int  1571 1572 1544 1573 1574 1575 310 1576 1577 1249 ...
 - attr(*, "na.action")= 'omit' Named int [1:44] 27 218 286 413 417 769 855 1120 1242 1897 ...
  ..- attr(*, "names")= chr [1:44] "27" "218" "286" "413" ...

df2:

'data.frame':   301 obs. of  3 variables:
 $ Test_date : POSIXct, format: "2019-10-21 11:39:00" "2019-10-21 11:39:00" "2019-10-21 11:05:00" "2019-10-21 11:05:00" ...
 $ Value     : num  2.23 5.25 4.63 4.65 3.37 2.06 7.24 2.44 4.27 6.15 ...
 $ Patient_ID: int  1306 1280 1272 1230 1257 1328 1265 1301 1298 127 ...
 - attr(*, "na.action")= 'omit' Named int [1:139] 10 20 61 125 131 187 223 254 293 298 ...
  ..- attr(*, "names")= chr [1:139] "10" "20" "61" "125" ...


from Recent Questions - Stack Overflow https://ift.tt/3qPuoRr
https://ift.tt/eA8V8J