2022-06-30

GORM .Save doesn't save a "has one" relation to the database

I have these structs:


type Book struct {
    gorm.Model
    Title       string      `json:"title"`
    Author      string      `json:"author"`
    Description string      `json:"description"`
    Category    string      `json:"Category"`
    Publisher   string      `json:"publisher"`
    AuthorsCard AuthorsCard `gorm:"foreignKey:BookID" json:"authorscard"`
}

type AuthorsCard struct {
    //gorm.Model   // I purposely don't want to use gorm.Model
    ID          uint `gorm:"primarykey"`
    BookID      uint
    Name        string `json:"name"`
    Age         int    `json:"age"`
    YearOfBirth int    `json:"year"`
    Biography   string `json:"biography"`
}

And I'm trying to write an update function:

// controller.go
func UpdateBook(ctx *gin.Context) {
    enableCors(&ctx.Writer)
    id := ctx.Param("ID")
    var updateBook = &models.Book{}
    if err := ctx.BindJSON(updateBook); err != nil {
        ctx.AbortWithStatus(http.StatusBadRequest)
        log.Fatal(err)
    } else {
        repository.UpdateBook(updateBook, id)
        ctx.JSON(http.StatusOK, updateBook)
        log.Println(updateBook)
    }
}
//repository.go
func UpdateBook(updateBook *models.Book, ID string) {
    book, db := GetBookById(ID)
    if updateBook.Title != "" {
        book.Title = updateBook.Title
    }

    if updateBook.Author != "" {
        book.Author = updateBook.Author
    }

    if updateBook.Publisher != "" {
        book.Publisher = updateBook.Publisher
    }
    if updateBook.Description != "" {
        book.Description = updateBook.Description
    }
    if updateBook.Category != "" {
        book.Category = updateBook.Category
    }

    if updateBook.AuthorsCard.Name != "" {
        book.AuthorsCard.Name = updateBook.AuthorsCard.Name
    }
    if updateBook.AuthorsCard.Age != 0 {
        book.AuthorsCard.Age = updateBook.AuthorsCard.Age
    }
    if updateBook.AuthorsCard.YearOfBirth != 0 {
        book.AuthorsCard.YearOfBirth = updateBook.AuthorsCard.YearOfBirth
    }
    if updateBook.AuthorsCard.Biography != "" {
        book.AuthorsCard.Biography = updateBook.AuthorsCard.Biography
    }

    db.Save(&book)
    // same with db.Preload("AuthorsCard").Save(&book)
}

The issue: when I make a PUT request, the response contains fully updated data. But when I then make a GET request, all my fields have been updated except the related AuthorsCard.

PUT response: 200 Code

{
    "ID": 0,
    "CreatedAt": "0001-01-01T00:00:00Z",
    "UpdatedAt": "0001-01-01T00:00:00Z",
    "DeletedAt": null,
    "title": "Test",
    "author": "author",
    "description": "something",
    "Category": "Category",
    "publisher": "PB",
    "authorscard": {
        "ID": 0,
        "BookID": 0,
        "name": "Updated",
        "age": 22,
        "year": 1999,
        "biography": "biography Updated"
    }
}

GET response after that:

[
    {
        "ID": 1,
        "CreatedAt": "2022-06-29T14:57:37.489639+03:00",
        "UpdatedAt": "2022-06-29T15:50:11.578724+03:00",
        "DeletedAt": null,
        "title": "Test",
        "author": "author",
        "description": "something",
        "Category": "Category",
        "publisher": "PB",
        "authorscard": {
            "ID": 1,
            "BookID": 1,
            "name": "test",
            "age": 23,
            "year": 1999,
            "biography": "23fdgsdddTEST"
        }
    }
]

As you can see, "authorscard" hasn't changed. Can someone tell me what I'm doing wrong?



chokidar works in the VS Code debugger, but fails to load when the Electron app is built

I am using chokidar to monitor file changes in a directory. The following works if the app is started in the VS Code debugger.

const chokidar = require('chokidar');

But after building the app and starting it by clicking the app icon, the following error appears in a popup dialog:

Uncaught Exception:
Error: Cannot find module 'chokidar'

These are the Electron and electron-builder versions:

electron-builder  version=22.13.1 os=20.2.0
writing effective config  file=dist/builder-effective-config.yaml
packaging       platform=darwin arch=x64 electron=11.5.0 appOutDir=dist/mac

How to solve it?



AWS Pinpoint SDK - How to import a segment and create campaign

I am working on a Pinpoint integration that will hopefully use the Java SDK to create a segment and launch a campaign in one shot. I am having trouble determining how to do this. If I use ImportJobRequest to import a CSV/JSON file from S3 and use the segment ID when creating the campaign, I get the error "Segment specified in Segmented is not found".

I have verified that the segment is created with the id I am attempting to use.

This may be the same as this unanswered question: Python: Passing a variable to another function

Is there a way to import a segment and create a campaign at once in a single Lambda function? Polling for the imported segment in a separate function seems like bad design.



How to do a SUMX on a GroupBy table in Webi?

I have DAX code in Power BI that displays how many "check-ups" have been done in a particular location. I am currently trying to transfer this over to SAP Web Intelligence Rich Client.

1.2_FrequenciedDay- = 
var minDate=MIN(Datekey[ModifiedDate])
var maxDate=MAX(Datekey[ModifiedDate])

var summarizedTable=SUMMARIZE('x','x'[Datekey],"count",distinctcount('x'[DivisionKey]))
var Cumulativesum=SUMX(FILTER(summarizedTable,[Datekey]>=minDate&&[Datekey]<=maxDate),[count])
return Cumulativesum

Are there SUMX and SUMMARIZE translations for Web Intelligence?



Achieve hexdump in Python

How can I achieve the following in Python 3?

cat file | sigtool --hex-dump | head -c 2048

I'm attempting to read a .ndb database which I have created in order to check for malicious PHP files; however, I need to be able to create hex signatures of files in Python in order to check them against this database.
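For reference, one way this could be approximated in Python, assuming `sigtool --hex-dump` simply emits the file's bytes as one continuous lowercase hex string (the function name `hex_signature` is made up; the 2048-character cap mirrors the `head -c 2048` in the pipeline):

```python
import binascii

def hex_signature(path, limit=2048):
    # Read the file as raw bytes and encode them as a lowercase hex string,
    # truncated to `limit` characters (mirroring `head -c 2048`).
    with open(path, "rb") as f:
        data = f.read()
    return binascii.hexlify(data).decode("ascii")[:limit]

# usage (hypothetical file): hex_signature("suspect.php")
```

The resulting string can then be compared against the hex signatures stored in the .ndb database.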



How can I fix the Content Security Policy error to make 3rd party API requests in my Stripe App UI extension?

When making an HTTP request to "https://example.com/api", the console throws an error:

Refused to connect to...because it violates the following Content Security Policy directive: "connect-src http://localhost:*...


How can I resolve this error?



Unit test Apache Camel specific routes by filtering (Model#setRouteFilter)

How do I include only certain routes in my unit test? For example, how do I enable only my-translation-route?

public class TestRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("ftp://my-ftp-server:21/messages")
                .routeId("my-inbound-route")
                .to("direct:my-translation-route");

        from("direct:my-translation-route")
                .routeId("my-translation-route")
                .bean(MyBean.class)
                .to("direct:my-outbound-route");

        from("direct:my-outbound-route")
                .routeId("my-outbound-route")
                .to("http://my-http-server:8080/messages");
    }
}

I tried it with Model#filterRoutes, but this did not work. All routes were loaded.

class TestRouteTest extends CamelTestSupport {
    @Override
    protected RoutesBuilder createRouteBuilder() {
        return new TestRoute();
    }

    @Override
    public boolean isUseAdviceWith() {
        return true;
    }

    @Test
    void testIfItWorks() throws Exception {
        context.setRouteFilterPattern("my-translation-route", null);

        AdviceWith.adviceWith(context, "my-translation-route", a -> {
            a.mockEndpointsAndSkip("direct:my-outbound-route");
        });

        context.start();

        getMockEndpoint("mock:direct:my-outbound-route").expectedBodyReceived().expression(constant("Hahaha! 42"));

        template.sendBodyAndHeaders("direct:my-translation-route", "42", null);

        assertMockEndpointsSatisfied();
    }
}

I got it working with the override of CamelTestSupport#getRouteFilterIncludePattern, e.g.:

@Override
public String getRouteFilterIncludePattern() {
    return "direct:my-translation-route";
}

But then this is set for all tests in this test class.



2022-06-29

Can use public IP address to connect to web server, but only when connected to the same network

Introduction

I have a simple Node.js web server which just contains a simple Express example. It is hosted at my private IP address 192.168.1.155 on port 80. I can use that address to connect to the server and everything works fine.
Then I went to my router to add port forwarding, and I can now use my public IP address to connect to the server too.

Problem

But the connection via the public IP address only works when I am connected to the same network. I tried to connect with the public IP from my iPhone using mobile data instead of the same Wi-Fi, and it didn't work.

Solution?

Has anyone discovered the same problem and is there a solution to it?

Edit...

Thanks to @robertklep, who mentioned CGN or CGNAT (Carrier-Grade Network Address Translation), I visited https://www.remoterig.com/wp/?page_id=3494 and found this: "CGN makes communication in the other direction, from outside Internet and in to LAN impossible"



variable background colour for gnuplot

I'd like to colour the background of my timeseries plot (a few 10^5 s, so several days) depending on the time of day: white for the day regions, and, say, a dark blue for the night, and perhaps some transition in between.

Continuously changing colour would be nice, but a more "blocky" design is also fine. The dataset has a resolution of ten seconds (and many columns), so the plot is already a bit slow.

I'll figure out the math myself (the length of day changes over the year); for the moment a sine (or 0/1 rectangle) with a frequency of 1 d is fine. But how do I plot it so the ordinate becomes a background colour instead of a vertical screen coordinate?

And of course there is a hitch: I want everything, the actual graph and the background colour, in one plot command so I can still zoom. This

set multi
gamma = 2.2; color(gray) = gray**(1./gamma)
set palette model RGB functions color(gray), color(gray), color(gray)
set pm3d map; splot [0:20] sin(x); unset pm3d

plot ....

looks super nice after you adjust the margins etc., and is about what I'm dreaming of. I'd be happy with a black box per night, too.



crabbly / Print.js Print not working, Print Preview not showing

We are using 1.6 of crabbly/Print.js https://github.com/crabbly?tab=repositories https://printjs.crabbly.com/

It works just fine until a particular setting in Edge is turned on. This is our code:

const bytes = convertBase64ToUint8Array(json.contentBase64)
const blob = new Blob([bytes], { type: json.contentType });
const url = window.URL.createObjectURL(blob);
showMessage.success('Successfully downloaded the report as pdf', url)
print({ printable: url, type: 'pdf', showModal: true })


This is the error we are getting. I cannot reproduce this in development.



How to fix a CORS error with ngrok in Spring Boot

Introduction

I tried to access a Spring Boot server using an ngrok URL, but I get a CORS error. The allowed origins already include the URL from which I am trying to access the APIs, but still no good.

Any suggestion is appreciated.

Spring Boot Project -

POM

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.1</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>crostest003</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>crostest003</name>
    <description>crostest003</description>
    <properties>
        <java.version>11</java.version>
        <kotlin.version>1.6.21</kotlin.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-webflux</artifactId>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.module</groupId>
            <artifactId>jackson-module-kotlin</artifactId>
        </dependency>
        <dependency>
            <groupId>io.projectreactor.kotlin</groupId>
            <artifactId>reactor-kotlin-extensions</artifactId>
        </dependency>
        <dependency>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-reflect</artifactId>
        </dependency>
        <dependency>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-stdlib-jdk8</artifactId>
        </dependency>
        <dependency>
            <groupId>org.jetbrains.kotlinx</groupId>
            <artifactId>kotlinx-coroutines-reactor</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>io.projectreactor</groupId>
            <artifactId>reactor-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
        <testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.jetbrains.kotlin</groupId>
                <artifactId>kotlin-maven-plugin</artifactId>
                <configuration>
                    <args>
                        <arg>-Xjsr305=strict</arg>
                    </args>
                    <compilerPlugins>
                        <plugin>spring</plugin>
                    </compilerPlugins>
                </configuration>
                <dependencies>
                    <dependency>
                        <groupId>org.jetbrains.kotlin</groupId>
                        <artifactId>kotlin-maven-allopen</artifactId>
                        <version>${kotlin.version}</version>
                    </dependency>
                </dependencies>
            </plugin>
        </plugins>
    </build>

</project>

application.properties

server.port=9005
#security
spring.security.user.name=service
spring.security.user.password=password

Controller

@RestController
@RequestMapping("/")
class ControllerOne {

    @GetMapping
    fun getOne(): GenericResponse {
        return GenericResponse(message = UUID.randomUUID().toString())
    }
}

data class GenericResponse(
    var message: String = "",
) : Serializable

AppSecurityConfig

import org.springframework.context.annotation.Bean
import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity
import org.springframework.security.config.web.server.ServerHttpSecurity
import org.springframework.security.web.server.SecurityWebFilterChain
import org.springframework.web.cors.CorsConfiguration

@EnableWebFluxSecurity
class AppSecurityConfig {

    @Bean
    fun securityWebFilterChain(http: ServerHttpSecurity): SecurityWebFilterChain {
        return http
            .cors()
            .configurationSource { _ ->
                val cors = CorsConfiguration()
                cors.allowedOrigins =
                    listOf("https://stackoverflow.com")
                cors.allowedMethods = listOf("*")
                cors.allowedHeaders = listOf("*")
                cors.allowCredentials = true
                cors
            }
            .and()
            .csrf().disable()
            .build()
    }
}

Made a ngrok url -

ngrok http --host-header=rewrite 9005

Now I want to test this from Chrome: I opened the developer tools on stackoverflow.com and made a fetch request.

fetch('ngrokUrl',{headers:{'Access-Control-Allowed-Origin':'https://stackoverflow.com'}}).then(console.log);

The error I got:


How do I fix this?

The project is very basic and I barely made any changes, but there is still a CORS issue.

Took reference from - baeldung

Demo Project Link - Github

Update

Tried changing ngrok to a past version (2.3.40), but still no good.

Strangely, only POST requests are working, but GET requests give a CORS error.

Major Update >>>> 29-6-2022

Everything works in Windows 10; this error only appears in Windows 11.



How to create nested training and testing sets?

I'm working with the ChickWeight data set in R. I'm looking to create multiple models, each trained for an individual chick. As such, I am nesting the data so that a dataframe is created for each individual chick and stored within the list column.

Here is the start:

library(tidyverse)
library(datasets)
data("ChickWeight")

ChickWeightNest <- ChickWeight %>% 
  group_by(Chick) %>% 
  nest()

From here, training a linear regression model on all dataframes simultaneously is very easy: simply build the model as a function, then mutate a new column and map. However, building a more sophisticated model (e.g. xgboost) requires first splitting the data into testing and training sets. How can I split all my nested data frames at once to create training and testing sets, so that I can train multiple models simultaneously?

As a side note, info on training/tuning multiple models seems to be relatively sparse in my research; any related resources or past Stack Overflow questions would be very appreciated.



Can't load components using VueJS

I'm trying to use dynamic import but nothing is happening

This is my code:

<component :is="state.name"/>
<button @click="onClick">Click me !</button>

setup() {
  const state = reactive({
    name: computed(() => isShow.value == true ? import('./b.vue') : import('./a.vue'))
  });

  const isShow = ref(false);
  const onClick = () => {
    console.log(5)
    isShow.value = true;
  }

  return {
    state,
    onClick
  }
},

And here are my b & a components:

b
<script>
export default {
    data(){},
    created(){
        alert('lol')
    }
}
</script> 

Please, I need help loading my components programmatically.



2022-06-28

PHP str_ireplace same word with accent or not

Hope this is a better...

I set mysqli_set_charset($conn, "utf8") before doing a standard SELECT-FROM-WHERE query. This query performs an accent-insensitive compare.

str_ireplace(search, replace, text) does an accent-sensitive compare on search. I would need search to do an accent-insensitive compare.

I want to highlight the word "Français". I replace "Français" by

<mark>Français</mark>

but at the same time I want to replace "Francais" by

<mark>Francais</mark>

older post:

I use a simple way to highlight some text:

$markReplace = "<mark>" . $wordToSearch . "</mark>";
$fullText = str_ireplace($wordToSearch, $markReplace, $fullText);
echo $fullText;

It works fine; the problem is that sometimes the same $wordToSearch can have an accent or not. For example "huître-huitre", "Francais-Français", "echo-écho", because of typos. And contrary to MySQL, str_ireplace doesn't treat a letter with an accent as the same letter without the accent.

$unwanted_array = array('Š'=>'S', 'š'=>'s', 'Ž'=>'Z', 'ž'=>'z', 'À'=>'A', 'Á'=>'A', 'Â'=>'A', 'Ã'=>'A', 'Ä'=>'A', 'Å'=>'A', 'Æ'=>'A', 'Ç'=>'C', 'È'=>'E', 'É'=>'E',
                        'Ê'=>'E', 'Ë'=>'E', 'Ì'=>'I', 'Í'=>'I', 'Î'=>'I', 'Ï'=>'I', 'Ñ'=>'N', 'Ò'=>'O', 'Ó'=>'O', 'Ô'=>'O', 'Õ'=>'O', 'Ö'=>'O', 'Ø'=>'O', 'Ù'=>'U',
                        'Ú'=>'U', 'Û'=>'U', 'Ü'=>'U', 'Ý'=>'Y', 'Þ'=>'B', 'ß'=>'Ss', 'à'=>'a', 'á'=>'a', 'â'=>'a', 'ã'=>'a', 'ä'=>'a', 'å'=>'a', 'æ'=>'a', 'ç'=>'c',
                        'è'=>'e', 'é'=>'e', 'ê'=>'e', 'ë'=>'e', 'ì'=>'i', 'í'=>'i', 'î'=>'i', 'ï'=>'i', 'ð'=>'o', 'ñ'=>'n', 'ò'=>'o', 'ó'=>'o', 'ô'=>'o', 'õ'=>'o',
                        'ö'=>'o', 'ø'=>'o', 'ù'=>'u', 'ú'=>'u', 'û'=>'u', 'ý'=>'y', 'þ'=>'b', 'ÿ'=>'y' );
$str = strtr( $str, $unwanted_array );

A solution that uses something like this doesn't work, because it will change all the accents in $fullText. I need to keep the original words when I echo $fullText.

Can't figure out the solution.

Thanks... Andy



Jackson-databind deserializes JSON with missing property as "null" instead of default value

When deserializing a JSON string, missing properties are being set to null when they shouldn't be. Below is the POJO class:

    @Builder
    @Getter
    @Setter
    @JsonInclude(JsonInclude.Include.NON_NULL)
    @NoArgsConstructor
    @AllArgsConstructor
    @ToString
    @EqualsAndHashCode
    static class MyPojo {
        
        @JsonProperty(value = "OptionalProp", required = false, defaultValue = "")
        @Builder.Default
        @Nullable
        @JsonSetter(value = "", nulls = Nulls.AS_EMPTY)
        private String optionalProp = "";
    
        @JsonProperty(value = "RequiredProp", required = false, defaultValue = "")
        @Builder.Default
        @Nullable
        @JsonSetter(value = "", nulls = Nulls.AS_EMPTY)
        private String requiredProp = "";
    }

JSON String to deserialize:

{
  "RequiredProp" : "test"
}

Here is the deserialization:

private final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
private final MyPojo myPojo = OBJECT_MAPPER.readValue(inputStream, MyPojo.class);

And here is the output:

MyPojo(optionalProp=null, requiredProp=test)

BUT creating the POJO with the builder:

        final MyPojo myPojo = MyPojo.builder()
            .requiredProp("test")
            .build();

Results in the following POJO:

MyPojo(optionalProp=, requiredProp=test)

I'm using:

Jackson-databind 2.12.x
Jackson-annotation 2.12.x
Jackson-core 2.12.x

Is there a minor version change from one of these packages that changes the behavior?



Why doesn't the OnPlayerEnteredRoom() method work?

I make a room in Unity with Photon PUN. When I join the room, OnPlayerEnteredRoom() doesn't get called. MonoBehaviourPunCallbacks and IInRoomCallbacks are already inherited. Code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using TMPro;
using UnityEngine.UI;
using Photon.Pun;
using Photon.Realtime;
public class RoomServer : MonoBehaviourPunCallbacks, IInRoomCallbacks
{
    public TMP_Text p1, p2, p3, p4;
    public Sprite p1i, p2i, p3i, p4i;
    public Sprite p1is, p2is, p3is, p4is;
    public GameObject playerPrefab;

    // Start is called before the first frame update
    void Start()
    {
        PhotonNetwork.AutomaticallySyncScene = true;
        Debug.Log("ao");
    }

    // Update is called once per frame
    void Update()
    {

    }

    public void OnPlayerEnteredRoom() {
        PhotonNetwork.Instantiate(playerPrefab.name, new Vector3(0,0,0), Quaternion.identity);
        Debug.Log("Player has joined!");
    }

    public void OnPlayerLeftRoom(Player newPlayer) {

    }

    public void OnRoomPropertiesUpdate(Hashtable hashtable) {

    }

    public void OnMasterClientSwitched(Player newPlayer) {

    }

    public void OnPlayerPropertiesUpdate(Player newPlayer, Hashtable hashtable) {

    }
}



How to set parameters before simulating to make it more consistent with the real road network? (in SUMO)

In order to make the simulation more consistent with the real road, we should set some traffic parameters in SUMO, such as max vehicle speed, max acceleration, minGap, etc. Which parameters need to be set in SUMO? And how?



How to display data ordered by time, not in blocks, with Laravel?

I'm working on a ReactJS and Laravel chat. The problem is that when I try to display messages for two specific users (sender and receiver), I get them block by block instead of message by message: a block of messages for the sender, then a block of messages for the receiver. This is the code:


function getChats($id){
    $c = chat::all()->where('id', $id);

    //$types = chat::all()->where('id', $id);

    foreach ($c as $q) {
        $q = chat::where('senderId', $q->userId)
            ->where('userId', $q->senderId)->get();
        foreach ($q as $a) {
            $a = chat::where('userId', $a->senderId)
                ->where('senderId', $a->userId)->get();

            $finalResult = $a->merge($q);

            // return ['chat' => $s];
        }
    }

    return ['chat' => $finalResult];
}

I merge the two queries to get the messages of the sender (senderId) and the receiver (userId). Thanks in advance for your help.



2022-06-27

selenium / seleniumwire unknown error: cannot determine loading status from unknown error: unexpected command response

Here's the error:

selenium.common.exceptions.WebDriverException: Message: unknown error: cannot determine loading status
from unknown error: unexpected command response
  (Session info: chrome=103.0.5060.53)

I'm using matching WebDriver and Chrome versions.

Here is the script; its job is to open a webpage using the normal user data directory and print the responses.

from seleniumwire import webdriver  # Import from seleniumwire


chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("user-data-dir=C:\\selenium")

driver = webdriver.Chrome(chrome_options=chrome_options)

driver.get('https://katalon.com/')


for request in driver.requests:
    if request.response:
        print(request.response.status_code)


Invalid ARN error while creating S3 Bucket Policy using Policy generator

I'm trying to create an Amazon S3 bucket policy using the Policy Generator. Though this is very basic, I'm not sure why I'm getting "Resource field is not valid. You must enter a valid ARN." for any ARN, e.g. for "arn:aws:s3:::s3-demo-bucket-2022". I have tried with multiple S3 buckets and AWS accounts, all giving the same problem. Any help/suggestions?



Python virtual environment does not have a scripts folder and cannot be activated

I am new to programming and I want to work with Python and the Django framework.

To create a venv I run python3 -m venv ./venv/drf in a terminal in VS Code; I found this while researching my problem. But when I installed the venv, there was no scripts folder in the venv folder.

To activate my venv I tried source .venv/drf/bin/activate, but this isn't working either. I am working on a Mac and I installed Python previously. I even installed Django in VS Code. What can I do to get the scripts folder and activate my venv?



pandas vectorized lookup without deprecated lookup()

My problem concerns lookup(), which is deprecated, so I'm looking for an alternative. The documentation suggests using loc() (which does not seem to work with a vectorized approach) or melt() (which seems quite convoluted). Furthermore, the documentation suggests factorize(), which (I think) does not work for my setup.

Here is the problem: I have a 2-column DataFrame with x,y-values.

import random

import pandas as pd

k = 20
y = random.choices(range(1,4), k=k)
x = random.choices(range(1,7), k=k)
tuples = list(zip(x, y))
df = pd.DataFrame(tuples, columns=["x", "y"])
df

And I have several DataFrames in crosstab-format of df. For example one called Cij:

Concordance table (Cij):
x     1     2     3    4     5     6  RTotal
y                                           
1   16     15    13  NaN     5   NaN     108
2   NaN    12   NaN   15   NaN   NaN      87
3   NaN   NaN     6  NaN    13    14     121

I now want to perform a vectorized lookup in Cij from the x,y-pairs in df to generate a new column Crc in df, which so far looked like this (plain and simple):

df["Crc"] = Cij.lookup(df["y"],df["x"])

How can I achieve the same thing without lookup()? Or did I just not understand the suggested alternatives?

Thanks in advance!

Addendum: Working code example as requested.

data = [[1,1],[1,1],[1,2],[1,2],[1,2],[1,3],[1,3],[1,5],[2,2],[2,4],[2,4],[2,4],[2,4],[2,4],[3,3],[3,3],[3,5],[3,5],[3,5],[3,6],[3,6],[3,6],[3,6],[3,6]]
df = pd.DataFrame(data, columns=["y", "x"])

# crosstab of df
ct_a = pd.crosstab(df["y"], df["x"])
Cij = pd.DataFrame([], index=ct_a.index, columns=ct_a.columns) #one of several dfs in ct_a layout

#row-wise, than column-wise filling of Cij
for i in range(ct_a.shape[0]):           
  for j in range(ct_a.shape[1]):
    if ct_a.iloc[i,j] != 0:
      Cij.iloc[i,j]= ct_a.iloc[i+1:,j+1:].sum().sum()+ct_a.iloc[:i,:j].sum().sum()

#vectorized lookup, to be substituted with future-proof method
df["Crc"] = Cij.lookup(df["y"],df["x"])

Also, loop-based "filling" of Cij is fine, since crosstabs of df are always small. However, df itself can be very large so vectorized lookup is a necessity.
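For context, one commonly suggested stand-in for the removed lookup() converts the labels to integer positions with Index.get_indexer and then uses NumPy fancy indexing; a minimal sketch (the helper name lookup_values is made up here):

```python
import pandas as pd

def lookup_values(table, row_labels, col_labels):
    # Translate row/column labels into integer positions, then pick one
    # cell per (row, col) pair with NumPy fancy indexing.
    ridx = table.index.get_indexer(row_labels)
    cidx = table.columns.get_indexer(col_labels)
    return table.to_numpy()[ridx, cidx]

# Drop-in for the deprecated call:
# df["Crc"] = lookup_values(Cij, df["y"], df["x"])
```

This stays vectorized, so it should remain fast even when df is very large.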



How to batch app events with a max of 1000 events per second when using Splunk HTTP Event Collector (HEC)

I need to send the batched events to Splunk HTTP Event Collector, say 1000 events per second.

Below is an example of log events that are sent to Splunk HEC:

% curl "https://splunk-example.com:8088/services/collector/raw?channel=093DCD-BC98-8UET-8AFE-8413C3825C4C&sourcetype=test_type&index=test_index"
-H "Authorization: Splunk ******-****-****-****-*********" -d '<log line 1>   <log line 2>  <log line 3>  <log line 4>

  Output:  {"text":"Success","code":0}%

So how do I configure the application to send its logs as 1000 events per second to Splunk HEC?
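For illustration, client-side batching could look like the sketch below: events are grouped into chunks of up to 1000, and each chunk is serialized as back-to-back JSON event objects, which HEC accepts in a single POST. The helper name build_batches and the sourcetype/index values are assumptions, not Splunk API names:

```python
import json

def build_batches(events, batch_size=1000):
    # Group event dicts into chunks of `batch_size` and serialize each chunk
    # as concatenated JSON event objects -- one HEC request body per chunk.
    payloads = []
    for i in range(0, len(events), batch_size):
        chunk = events[i:i + batch_size]
        payloads.append("".join(
            json.dumps({"event": e, "sourcetype": "test_type", "index": "test_index"})
            for e in chunk
        ))
    return payloads
```

Each payload would then be POSTed to the /services/collector/event endpoint with the "Authorization: Splunk &lt;token&gt;" header; pacing to 1000 events per second amounts to sending one such batch per second.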



How to compare floats in Logstash?

Here's the pipeline config for the test:

input {
  file {
    path => "/tmp/test1.log"
  }
}
filter {
  json {
    source => "message"
  }
}

output {
  if [a] == 1.1 {
    stdout {}
  }
}

I echo a test log line to the log file:

echo '{"a": 1.1,"b": "test"}' >> /tmp/test1.log

But there isn't any output in the console, and the condition if [a] == "1.1" doesn't work either.

Does somebody know how to compare floats?

Thanks!



Iterate through fields using array Django

I have this model:

class Some_model(models.Model):
    field_1 = models.CharField(max_length=200, blank=True, null=True)
    field_2 = models.CharField(max_length=200, blank=True, null=True)

and this function:

# create a function to upload a model object given a json object
def upload_object_values(model, json_values, keys=None):
    if json_values:

        # assign a value to each key, which is a field in the given model
        for key, value in json_values.items():
            setattr(model, key, value)
        
        # if the object has keys to check
        if keys:
            # check if exists using the keys

When called like this:

upload_object_values(Some_model(), {'field_1': 'val', 'field_2': 'val_2'}, ['field_2'])

it should do a get_or_create inside the upload_object_values function using the fields in the keys parameter (e.g. field_2 as the key):

Some_model.objects.get_or_create(field_2='val_2')
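One way the intended behaviour could be expressed (a sketch only; split_lookup_fields is a made-up helper, and the ORM call is shown as a comment since it needs a real model):

```python
def split_lookup_fields(json_values, keys):
    # Separate the identifying fields (`keys`) from the rest, so the
    # identifiers become get_or_create() lookups and the rest its defaults.
    lookups = {k: v for k, v in json_values.items() if k in keys}
    defaults = {k: v for k, v in json_values.items() if k not in keys}
    return lookups, defaults

# Hypothetical use inside upload_object_values, when `keys` is given:
# lookups, defaults = split_lookup_fields(json_values, keys)
# obj, created = Some_model.objects.get_or_create(**lookups, defaults=defaults)
```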




Django 4.x - Conditional QuerySet for Pagination and a many-to-many relationship

Disclaimer: I have searched and a question tackling this particular challenge could not be found at the time of posting.

The Requirement

For a class-based view I need to implement pagination for a QuerySet derived through a many-to-many relationship. Here's the requirement, described more concretely:

  • Many Library Records can belong to many Collections
  • Web pages are required for most (but not necessarily all) Collections, and so I need to build views/templates/urls based on what the client identifies as required
  • Each Collection Page displaying the relevant Library Records requires Pagination, as there may be hundreds of records to display.

The First Approach

And so with this requirement in mind I approached this as I normally would when building a CBV with Pagination. However, this approach did not allow me to meet the requirement. What I quickly discovered was that the Pagination method in the CBV was building the object based on the declared model, but the many to many relationship was not working for me.

I explored the use of object in the template, but after a number of attempts I was getting nowhere. I need to display Library Record objects but the many to many relationship demands that I do so after determining the records based on the Collection they belong to.

EDIT - Addition of model

models.py

class CollectionOrder(models.Model):
    collection = models.ForeignKey(
        Collection,
        related_name='collection_in_collection_order',
        on_delete=models.PROTECT,
        null=True,
        blank=True,
        verbose_name='Collection'
    )
    record = models.ForeignKey(
        LibraryRecord,
        related_name='record_in_collection_order',
        on_delete=models.PROTECT,
        null=True,
        blank=True,
        verbose_name='Library record',
    )
    order_number = models.PositiveIntegerField(
        blank=True,
        null=True,
    )
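Given the through model above, one common shape for this (a sketch with assumed names and template path) is to make the ListView's model the record, not the collection, and override get_queryset() so the built-in paginator only ever sees the records of one Collection:

```python
# a sketch: paginate LibraryRecord objects filtered via the through model
from django.views.generic import ListView

class CollectionRecordListView(ListView):
    model = LibraryRecord            # objects handed to the paginator/template
    paginate_by = 25
    template_name = "collection_records.html"   # assumed template

    def get_queryset(self):
        # follow CollectionOrder's related_name back to the records
        return (
            LibraryRecord.objects
            .filter(record_in_collection_order__collection_id=self.kwargs["pk"])
            .order_by("record_in_collection_order__order_number")
        )
```

In the template, object_list (and page_obj for the pager) then contain Library Records for that one Collection, so the standard pagination snippet works unchanged.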


2022-06-26

Pass a value in document.write

I have a <script> that creates a tagId value.

Currently, it will display as document.write(tagId);

I want to place tagId in a <form> as:

<input type="text" name="" value="tagId">

For example, something like:

document.write(< input type="text" name="" value="tagId" >);

How can I insert the tagId value into the text input?
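One way to do this (a sketch; the input's name attribute is a placeholder) is to interpolate tagId into the markup string before writing it:

```javascript
// tagId is assumed to come from the page's existing script
const tagId = "ABC123";

// build the markup with the value interpolated, then write it
const markup = `<input type="text" name="tag" value="${tagId}">`;
// document.write(markup);   // in the browser
console.log(markup);
```

A cleaner alternative to document.write is to give the input an id and set document.getElementById('tag').value = tagId after the page loads.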



Create a view on small screens when clicking on a link

Currently, I have a table and page built using Bootstrap. Here is the code:

<div class="row my-2">
  <div class="col-12 col-md-8">
    <div class="d-md-flex justify-content-between">
      <div>
      <h1 class="text-uppercase">Clients</h1>
    </div>
    <div>
      <input class="form-control-sm bg-dark text-light border-dark me-4" type="text" id="criteria" placeholder="Rechercher" onkeyup="showCustomers();">
    </div>
  </div>
</div>
  <div class="row">
    <div class="col-12 col-md-8">
      <table class="table table-dark table-borderless table-lines">
        <thead>
          <tr>
            <th scope="col">Login</th>
            <th scope="col">Num. Client</th>
            <th scope="col">Date inscription</th>
            <th scope="col">Dernier accès</th>
            <th scope="col">Forfait</th>
            <th scope="col">Crédit</th>

          </tr>
        </thead>
        <tbody id="result">
          <script>
          showCustomers("xxx");
          </script>
        </tbody>
      </table>
    </div>
    <div class="col-12 col-md-4">
      <div id="customerInfos" class="collapse">
        <p>Sélectionnez un client afin d'obtenir les détails et les paramètres associés.</p>
      </div>
    </div>

  </div>

Here is my table row as loaded:

<tr class="clickable-row <?=$classeStatut?>" data-utiId="<?=$uti_id?>" onclick="showUtilisateur(this);">
  <td data-label="Login">
    <div class="d-flex justify-content-end justify-content-md-start">
      <div>
        <img class="me-2" src="<?=getPP($uti_login)?>" width="20">
      </div>
      <div class="circle <?=$connectionStatus?> d-none d-md-block mt-2 me-1">
      </div>
      <div>
        <?=mhe($uti_login)?>
      </div>
      <?php if ($uti_profil==1) { ?>
        <div>
          <i class="bi bi-shield-check mx-2 text-success"></i>
        </div>
      <?php } ?>
    </div>
  </td>
  <td data-label="# Client"><?=mhe(POSTE_PREFIX . $uti_numposte)?></td>
  <td data-label="Inscrit"><?=mhe(makeFRDate($uti_date_inscription))?></td>
  <td data-label="Dernière co."><?=makeFRDate(mhe($uti_lastaccess))?></td>
  <td data-label="Forfait"><?=mhe($for_nom)?></td>
  <td data-label="Crédit"><span class="<?=$classCredit?>"><?=mhe($uti_credit)?> €</span></td>
</tr>

So when I click on a row of the table, the showUtilisateur() function is executed and puts some info in the customerInfos div. That works perfectly, but I want to improve the layout of the customerInfos div. On small screens I currently have to scroll to the bottom of the page to see the div. When I click on a row, I would like to open the div as a new view, and allow returning to the table view with a "back" button. I'm not sure how to define this kind of layout. Thanks for your suggestions.
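One lightweight pattern (a sketch with assumed wrapper ids; the classes are standard Bootstrap display utilities) is to treat the two columns as alternate views on small screens: hide the list when a row is clicked and let a back button restore it, while on md and larger both stay visible as before:

```html
<!-- assumed wrapper ids around the existing columns -->
<div class="row">
  <div id="listPane" class="col-12 col-md-8">
    <!-- existing table markup -->
  </div>
  <div id="detailPane" class="col-12 col-md-4 d-none d-md-block">
    <button class="btn btn-dark d-md-none mb-2" onclick="showList()">&larr; Retour</button>
    <div id="customerInfos">
      <p>Sélectionnez un client afin d'obtenir les détails et les paramètres associés.</p>
    </div>
  </div>
</div>

<script>
// call showDetail() at the end of showUtilisateur()
function showDetail() {
  document.getElementById('listPane').classList.add('d-none', 'd-md-block');
  document.getElementById('detailPane').classList.remove('d-none');
}
function showList() {
  document.getElementById('listPane').classList.remove('d-none');
  document.getElementById('detailPane').classList.add('d-none', 'd-md-block');
}
</script>
```

The d-none d-md-block pair means "hidden on small screens, visible from md up", so toggling it swaps the panes only below the md breakpoint.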



How to improve a variable lens blur in Python OpenCV?

I want to emulate the blur of a cheap camera lens (like Holga). It's not the same as standard Gaussian blur with constant kernel size.

So I'm not sure whether I should use GaussianBlur or filter2D.

I wrote the code and it works in general.

Input:

Input

Result:

Result

But I feel that it can be done better and faster.
I've found a similar question but it still has no answer.

How to implement different variable kernel shapes and avoid a black border?

import cv2
import numpy as np
import requests

url = r'https://upload.wikimedia.org/wikipedia/en/7/7d/Lenna_%28test_image%29.png'
resp = requests.get(url, stream=True).raw
img = np.asarray(bytearray(resp.read()), dtype="uint8")
img = cv2.imdecode(img, cv2.IMREAD_COLOR)

img_lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(img_lab)

# make blur map 
height, width = l.shape[:2]
center = np.array([height/2, width/2])
diag = ((height / 2) ** 2 + (width / 2) ** 2) ** 0.5
blur_map = np.linalg.norm(
    np.indices(img.shape[:2]) - center[:,None,None] + 0.5,
     axis = 0
)
blur_rad = 15

blur_map = blur_map / diag
blur_map = blur_map * blur_rad
cv2.imwrite('blur_map.png', cv2.normalize(blur_map, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_32F))

# bluring
l_blur = np.copy(l)
for x in range(width):
    for y in range(height):
        kernel_size = int(blur_map[y, x])
        cut = l[y - kernel_size:y + kernel_size, x - kernel_size:x + kernel_size].mean()
        if kernel_size == 0:
            #print(kernel_size)
            l_blur[y, x] = l[y, x]
            continue
        l_blur[y, x] = cut

cv2.imwrite('result_blur.png', l_blur)  
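One way to make this much faster (a sketch in plain NumPy, an assumption rather than the canonical OpenCV approach) is to pre-blur the image at a handful of fixed radii using summed-area tables and then blend those layers per pixel according to the blur map; edge padding inside the blur also avoids the black border:

```python
import numpy as np

def box_blur(img, k):
    """Box-blur a 2D array with a (2k+1)x(2k+1) kernel via summed-area tables."""
    img = img.astype(float)
    if k == 0:
        return img
    pad = np.pad(img, k, mode="edge")          # edge padding: no dark border
    c = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    n = 2 * k + 1
    win = c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]
    return win / (n * n)

def variable_blur(img, blur_map, max_k=8, layers=4):
    """Blend pre-blurred copies of img per pixel according to blur_map."""
    ks = np.round(np.linspace(0, max_k, layers)).astype(int)
    stack = np.stack([box_blur(img, k) for k in ks])
    # fractional layer index for every pixel
    idx = np.clip(blur_map, 0, max_k) / max_k * (layers - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, layers - 1)
    w = idx - lo
    rows, cols = np.indices(img.shape)
    return (1 - w) * stack[lo, rows, cols] + w * stack[hi, rows, cols]

# tiny demo on noise: sharp in the centre, blurred toward the corners
rng = np.random.default_rng(0)
img = rng.random((64, 64))
yy, xx = np.indices(img.shape)
dist = np.hypot(yy - 32, xx - 32) / np.hypot(32, 32)
out = variable_blur(img, dist * 8)
```

Swapping the box kernel for other shapes just means replacing box_blur with the blur of your choice per layer; the per-pixel blend stays the same.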


Python XML Parse errors with Invalid Token

I am moving code from Python 2.7 to Python 3.10. One section of the code creates XML which is 'prettified' and written to a file. But parsing in Python 3.x throws an error. In one case the problem seems to be with an encoded en-dash character.

<?xml version='1.0' encoding='utf8'?>
<properties>
    <entry key="name">AB&amp;R - RFA #3 \xe2\x80\x93 Alignment</entry>
</properties>

The parsing is done as follows:

xml_parsed = xml.dom.minidom.parseString(xml_string)
return xml_parsed.toprettyxml("    ", "\n")

The error thrown is:

not well-formed (invalid token): line 2

I don't think this problem happened with Python 2.7. There is a nice description about en-dash here (although I think my problem is not limited to en-dash).

What can be done to fix this?
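The literal \xe2\x80\x93 inside the XML suggests the en-dash's UTF-8 bytes were turned into a string with str() somewhere (a guess based on the snippet shown); under Python 2 that was harmless, but under Python 3 str() on bytes produces the b'...' repr with backslash escapes. Decoding the bytes explicitly fixes it; a minimal sketch:

```python
import xml.dom.minidom

entry = "AB&R - RFA #3 \u2013 Alignment".encode("utf-8")  # bytes, e.g. from a file

# Wrong under Python 3: str(entry) yields "b'AB&R ... \\xe2\\x80\\x93 ...'",
# and those backslash escapes end up verbatim inside the XML
broken = str(entry)

# Right: decode the bytes before building the XML string
text = entry.decode("utf-8")
xml_string = (
    "<?xml version='1.0' encoding='utf-8'?>"
    "<properties><entry key='name'>{}</entry></properties>"
).format(text.replace("&", "&amp;"))

doc = xml.dom.minidom.parseString(xml_string)  # parses cleanly now
```

The general rule: find every place the Python 2 code mixed bytes and str, and add an explicit .decode("utf-8") at the boundary.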



Not able to run Android app || 'Gradle-aware Make' is missing in app configuration

I have cloned a private repo that I can't share. The project builds successfully and I can create debug/release APKs as well. But I am not able to run it from the Run button in Android Studio.

When I click the Run button, the app does not launch on the device/emulator. It immediately says 'launch successful' even though it never starts the Gradle run.

So I compared many settings with a sample app that runs as expected, and found one difference in the app configuration (select the app [near the Run icon] -> Edit Configurations).

The 'Gradle-aware Make' step is missing from the 'Before launch' section in the 'General' tab of the app configuration. Check the screenshot below:

Missing Gradle-aware Make command

In the working sample it is present, and it can be removed/re-added normally. Check the screenshot below:

Sample Gradle-aware Make command

So how can I restore it? Can someone please help me?



2022-06-25

React Native: Problem with dependencies and archiving my project

I have returned to a project which I haven't worked on in a little while. I added a delete account feature and planned to release it to the apple store that day.

I ran npm install --legacy-peer-deps && cd ios && pod install. This all worked fine (my packages are a little out of date since I haven't worked in this project for a while, so I had to use legacy-peer-deps).

Next step, I went to XCode and tried to archive my project as usual but got a build failed with the error being Undefined symbol: _swift_stdlib_isStackAllocationSafe - after some research

I found a solution on this post: Symbol(s) not found for architecture arm64 - XCode

There was a really helpful approved answer. I added the suggested code to my podfile and tried to run npm install react-native-purchases@latest as suggested. BUT, I am presented with the following error:

React Native purchase error code

At this point I'm thinking: hmmm, a conflicting dependency; let me try npm install @react-native-firebase/app, and then this shows me the next error.

(screenshot of the second error)

I try to install babel-plugin-root-import and then it shows me the first error again, and the cycle repeats. I'm a bit stuck; I just want to get this project back up and running, but it's a pain!

Here's my package.json for reference:

{
  "scripts": {
    "start": "react-native start",
    "android": "npx react-native run-android",
    "ios": "npx react-native run-ios --simulator='iPhone 13'",
    "start:development": "NODE_ENV=development npx react-native start",
    "start:production": "NODE_ENV=production npx react-native start"
  },
  "dependencies": {
    "@fawazahmed/react-native-read-more": "^2.2.2",
    "@invertase/react-native-apple-authentication": "^2.1.0",
    "@react-native-community/async-storage": "^1.12.1",
    "@react-native-community/blur": "^3.6.0",
    "@react-native-community/google-signin": "^5.0.0",
    "@react-native-community/masked-view": "^0.1.10",
    "@react-native-community/slider": "^3.0.3",
    "@react-native-firebase/app": "^13.0.1",
    "@react-native-firebase/auth": "^14.2.1",
    "@react-native-firebase/firestore": "^10.5.1",
    "@react-native-firebase/storage": "^13.0.1",
    "@react-navigation/bottom-tabs": "^5.11.7",
    "@react-navigation/native": "^5.9.2",
    "@react-navigation/stack": "^5.14.2",
    "expo": "~40.0.0",
    "expo-app-loading": "^1.0.1",
    "expo-apple-authentication": "~2.2.2",
    "expo-google-app-auth": "^8.1.4",
    "expo-google-sign-in": "~8.4.0",
    "expo-linear-gradient": "~8.4.0",
    "expo-notifications": "~0.8.2",
    "expo-splash-screen": "~0.8.0",
    "expo-status-bar": "~1.0.3",
    "expo-updates": "~0.4.0",
    "firebase": "^8.2.5",
    "lottie-react-native": "^4.0.2",
    "react": "16.13.1",
    "react-apple-signin-auth": "^0.0.7",
    "react-dom": "16.13.1",
    "react-native": "~0.63.4",
    "react-native-calendars": "^1.1254.0",
    "react-native-countdown-circle-timer": "^2.5.0",
    "react-native-device-info": "^8.0.7",
    "react-native-dotenv": "^3.3.1",
    "react-native-fast-image": "^8.5.11",
    "react-native-gesture-handler": "^1.9.0",
    "react-native-google-signin": "^2.1.1",
    "react-native-haptic-feedback": "^1.11.0",
    "react-native-input-scroll-view": "^1.11.0",
    "react-native-keep-awake": "^4.0.0",
    "react-native-linear-gradient": "^2.5.6",
    "react-native-parallax-header": "^1.1.4",
    "react-native-purchases": "^4.1.1",
    "react-native-reanimated": "^1.13.2",
    "react-native-responsive-screen": "^1.4.2",
    "react-native-safe-area-context": "^3.1.9",
    "react-native-screens": "^2.17.1",
    "react-native-sound": "^0.11.1",
    "react-native-svg": "12.1.0",
    "react-native-unimodules": "~0.12.0",
    "react-native-web": "~0.13.12",
    "react-navigation": "^4.4.3",
    "use-sound": "^2.0.1"
  },
  "devDependencies": {
    "@babel/core": "~7.9.0",
    "babel-jest": "~25.2.6",
    "babel-plugin-root-import": "^6.6.0",
    "eslint-import-resolver-babel-plugin-root-import": "^1.1.1",
    "jest": "~25.2.6",
    "metro-react-native-babel-preset": "^0.66.2",
    "prettier": "2.6.2",
    "react-test-renderer": "~16.13.1"
  },
  "private": true,
  "name": "Flexeee-RN",
  "version": "1.0.0"
}

PODFILE:

require_relative '../node_modules/react-native/scripts/react_native_pods'
require_relative '../node_modules/react-native-unimodules/cocoapods.rb'
require_relative '../node_modules/@react-native-community/cli-platform-ios/native_modules'

$FirebaseSDKVersion = '8.9.0'

platform :ios, '11.0'

target 'FlexeeeRN' do
  use_unimodules!
  config = use_native_modules!

  use_react_native!(:path => config["reactNativePath"])

  # Uncomment the code below to enable Flipper.
  #
  # You should not install Flipper in CI environments when creating release
  # builds, this will lead to significantly slower build times.
  #
  # Note that if you have use_frameworks! enabled, Flipper will not work.
  #
  #  use_flipper!
  #  post_install do |installer|
  #    flipper_post_install(installer)
  #  end
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
   target.build_configurations.each do |config|
    config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '11.0'
   end
  end
 end

 post_install do |installer|
    react_native_post_install(installer)
    fix_library_search_paths(installer)
  end
end

def fix_library_search_paths(installer)
  def fix_config(config)
    lib_search_paths = config.build_settings["LIBRARY_SEARCH_PATHS"]
    if lib_search_paths
      if lib_search_paths.include?("$(TOOLCHAIN_DIR)/usr/lib/swift-5.0/$(PLATFORM_NAME)") || lib_search_paths.include?("\"$(TOOLCHAIN_DIR)/usr/lib/swift-5.0/$(PLATFORM_NAME)\"")
        # $(TOOLCHAIN_DIR)/usr/lib/swift-5.0/$(PLATFORM_NAME) causes problem with Xcode 12.5 + arm64 (Apple M1)
        # since the libraries there are only built for x86_64 and i386.
        lib_search_paths.delete("$(TOOLCHAIN_DIR)/usr/lib/swift-5.0/$(PLATFORM_NAME)")
        lib_search_paths.delete("\"$(TOOLCHAIN_DIR)/usr/lib/swift-5.0/$(PLATFORM_NAME)\"")
        if !(lib_search_paths.include?("$(SDKROOT)/usr/lib/swift") || lib_search_paths.include?("\"$(SDKROOT)/usr/lib/swift\""))
          # however, $(SDKROOT)/usr/lib/swift is required, at least if user is not running CocoaPods 1.11
          lib_search_paths.insert(0, "$(SDKROOT)/usr/lib/swift")
        end
      end
    end
  end

  projects = installer.aggregate_targets
    .map{ |t| t.user_project }
    .uniq{ |p| p.path }
    .push(installer.pods_project)

  projects.each do |project|
    project.build_configurations.each do |config|
      fix_config(config)
    end
    project.native_targets.each do |target|
      target.build_configurations.each do |config|
        fix_config(config)
      end
    end
    project.save()
  end
end

Any help is appreciated. Thanks for your time.

UPDATE: I did npm install react-native-purchases@latest --legacy-peer-deps as suggested below (seems to have worked). But now, when I run pod install it doesn't like the code I added into the Podfile and says:

[!] Invalid Podfile file: syntax error, unexpected end, expecting end-of-input.

 #  from /Users/jacksaunders/Flexeee-V2/ios/Podfile:40
 #  -------------------------------------------
 #    end
 >  end
 #  
 #  -------------------------------------------

I've double-checked the code I've added and it's exactly the same. I'm unsure what to do from here.
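For what it's worth, the Podfile above has one more end than it needs: the target block is closed by the first end, then two separate post_install blocks follow plus a stray end, which matches the "unexpected end" error. CocoaPods also expects a single post_install hook, so a merged tail might look like this (a sketch, unverified against this project):

```ruby
post_install do |installer|
  react_native_post_install(installer)
  fix_library_search_paths(installer)
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '11.0'
    end
  end
end
```

This replaces everything between the target block's end and the fix_library_search_paths definition.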



Kafka Schema Registry - block messages that aren't accepted in the schema registry in kafka

I have a schema in Kafka, and I need that every time a message is posted to this topic, the registered schema is used to check whether the message matches the expected pattern.

My schema is: (screenshot attached)

Curl post:

curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema": "{\"type\":\"record\",\"name\":\"Operacao\",\"namespace\":\"data.brado.operacao\",\"fields\":[{\"name\":\"id_operacao\",\"type\":\"string\"},{\"name\":\"tipo_container\",\"type\":\"string\"}, {\"name\":\"descricao_operacao\",\"type\":\"string\"},{\"name\":\"entrega\",\"type\":\"string\"},{\"name\":\"coleta\",\"type\":\"string\"},{\"name\":\"descricao_checklist\",\"type\":\"string\"},{\"name\":\"cheio\",\"type\":\"string\"},{\"name\":\"ativo\",\"type\":\"string\"},{\"name\":\"tipo_operacao\",\"type\":\"string\"} ]}"}' http://localhost:38081/subjects/teste/versions

What I need is that when I post to the topic, the message is rejected if it doesn't match this schema.

It should raise an error here, because I'm not sending the right schema: (screenshot attached)

And it would work in this case: (screenshot attached)

Can anyone help me do this schema check? I've looked everywhere I could find and haven't found any answers to this.
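For context: the Schema Registry by itself never blocks producers; plain Kafka clients can still write arbitrary bytes to the topic. Validation happens either client-side (produce with an Avro serializer that registers/fetches the schema, e.g. KafkaAvroSerializer) or broker-side via Confluent Server's schema validation, which can be switched on per topic. A sketch of the broker-side route (this is a Confluent Server feature, not available in plain Apache Kafka, and the topic name is taken from the subject above as an assumption):

```
kafka-topics --bootstrap-server localhost:9092 --alter --topic teste \
  --config confluent.value.schema.validation=true
```

With that config enabled (and the broker pointed at the Schema Registry), produce requests whose records don't reference a valid registered schema are rejected.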



Twitter DM API not returning all messages

I’m calling the 1.1 direct messages events list endpoint as an authenticated user; I’ve sent a few messages to the accompanying account from two different other Twitter accounts. However, I’m only getting the message_create results for the messages from one of the senders, not both.

When I’m signed in in the browser on Twitter, I do see the DM’s from both accounts.

Since the number of direct messages this account has is < 20, I also do not get a next_cursor property. So there is no "next page" which contains the messages.

All messages have been sent within the last week, so it’s not that the 30-day limit is in play here.

The (shortened) response I get back:

{
    "events": [{
        "type": "message_create",
        "id": "1537138057991884809",
        "created_timestamp": "1655317246326",
        "message_create": {
            "target": {
                "recipient_id": "114729189"
            },
            "sender_id": "1520424127873654786",
            "source_app_id": "3033300",
            "message_data": {
                "text": "An image",
                "entities": {
                    "hashtags": [],
                    "symbols": [],
                    "user_mentions": [],
                    "urls": []
                }
            }
        }
    }, 
    
    // There are more events, but all with the same sender_id.
    // But i have two different people who send a direct message (not a group message) to me.
    ],
    "apps": {
        "3033300": {
            "id": "3033300",
            "name": "Twitter Web App",
        }
    }
}

(screenshot of the DM inbox)

See the above screenshot, where you can see I can read both dm’s in Twitter itself.

Am I missing something, or is there some limitation on this endpoint I do not know about? There don't really seem to be any arguments I can pass according to the documentation.



Can I print pie charts using a for loop in Jupyter?

I need to use for loops to print multiple pie charts within a function, preferably horizontally. My assumption was, if I use a for loop to print pie charts, all the charts will be produced but the results would be shown vertically. However, only the very last figure is shown.

import matplotlib.pyplot as plt
for i in range(3):
    labels = ['part_1','part_2','part_3']
    pie_portions = [5,6,7]
    plt.pie(pie_portions,labels=labels,autopct = '%1.1f%%')
    plt.title(f'figure_no : {i+1}')
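All three pie() calls above draw into the same implicit figure, and Jupyter renders a figure only once at the end of the cell, so only the final state appears. Two common fixes: call plt.show() inside the loop (one figure per chart, stacked vertically), or, for the horizontal layout, give each chart its own axis with plt.subplots, e.g.:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend so this also runs without a display
import matplotlib.pyplot as plt

# one figure, three horizontally arranged axes: draw each pie on its own axis
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
labels = ['part_1', 'part_2', 'part_3']
pie_portions = [5, 6, 7]
for i, ax in enumerate(axes):
    ax.pie(pie_portions, labels=labels, autopct='%1.1f%%')
    ax.set_title(f'figure_no : {i+1}')
plt.show()
```

In a notebook you would omit the matplotlib.use("Agg") line; it is only there so the sketch runs headless.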


importing Amplify throws error "null is not an object (evaluating 'keys.filter)" in react native app

Any time I import Amplify into my React Native project's App.js file, I get the following error:

TypeError: null is not an object (evaluating 'keys.filter')

Here is how I'm importing it:

(screenshot of the import statement)

I run the project using Expo only. If I comment the import Amplify line out, any other files which use anything related to Amplify cause the same error to occur.

Initially, when I was loading this project for the first time, I had other errors to deal with, like first needing to create the aws-exports.js file. I copied this over from an old project (because this is meant to be a redo of another project that's already set up). Once I included that file, I had to update a few lines in it because of an improper reference to Linking from Expo. Once I fixed that, it threw the error I'm referencing here. Now, even if I delete the aws-exports file, it throws this error as soon as Amplify is imported into the App.js file.

(screenshots of the error output)

-- Update

I've found where the error is occurring. Some of my code gets executed, but the error happens inside the reactnative.js file while syncing between two in-memory storage modules.

I've tried to reproduce this error inside a fresh react application by copying the package.js file and then importing Amplify into the App.js file but it doesn't throw this error.

Here is a screenshot of where the error is taking place. I'm still trying to figure out how to pinpoint where in my code this error begins.

(screenshot of where the error occurs)



Save content of multiple div's as a PDF file by clicking HTML button

I have a button 'Generate PDF', and I want it to save some div elements as a PDF file. From what I read, I best use JavaScript, but I have zero experience with JS. I already tried third-party solutions like html2pdf, pdfmyurl, jsPDF, etc., but I didn't manage to get them to work the way I want.

This is the structure of the HTML file:

<!DOCTYPE html>
<html>
<head>
    <title>This is my title</title>
</head>

<body>

    <!-- HEADER --> 
        <div class="header">
            <img src="../logo/logo_17.png" width="320" height="100" />
        </div> 

    <!-- MENU -->  
        <div class="topnav" id="myTopnav">
            Menu...
        </div>  

    <!-- KOLOMMEN --> 
    <div class="row">
    
        <!-- LINKSE KOLOM -->        
            <div class="prodtitle" id="htmlContent1">Title</div>
            
            <div class="prodbuttons">
                <a class="h2button" href="#">BACK TO LIST</a>
                <button id="generatePDF" class="h2button">generate PDF</button>
            </div>
            
            <div class="prodinfotitle">Info/Description</div>
            
            <div class="prodinfo" id="htmlContent2">
                ...some text
            </div>  
            
            <div class="prodmaterialstitle">Materials/Ingredients</div>
            
            <div class="prodmaterials">
                ...some text 
            </div>
              
            <div class="gallery">            
                ...image 1 (if available)
            </div>
            
            <div class="gallery">
                ...image 2 (if available)
            </div>  
            
        <!-- RECHTSE KOLOM -->         
            <div class="side">
            ... 
            </div>      
    </div>    

    <!-- FOOTER --> 
        <div class="footer">
        ...footer
        </div>
</body>
</html> 

Now this is what I'm looking for:
When you click the 'Generate PDF' button, a PDF file should be created with the following div's: 'header', followed by the 'prodtitle' and then followed by 'prodinfotitle', 'prodinfo', 'prodmaterialstitle', 'prodmaterials' and the picture(s), if they're available. The PDF should then be saved as '[prodtitle].pdf', so that every file has the right title.

Can somebody help me with this piece of code and how to set it up?

Thanks very very much!

Update: Something I tried with my zero experience was this:

<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/jspdf/1.3.4/jspdf.min.js"></script>
    
    <script type="text/javascript">
        var doc = new jsPDF();
        var specialElementHandlers = {
            '#editor': function (element, renderer) {
            return true;
            }
        };
 
 
    $('#generatePDF').click(function () {
        doc.fromHTML($('#htmlContent1').html(), 15, 15, {
            'width': 700,
            'elementHandlers': specialElementHandlers
            });
        doc.fromHTML($('#htmlContent2').html(), 15, 15, {
            'width': 700,
            'elementHandlers': specialElementHandlers
            });
        doc.save('sample_file.pdf');
        });
    </script>

It didn't work: I got an empty PDF. I don't even know if the code is correct.
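jsPDF's old fromHTML renderer is very limited, and both calls above also write to the same 15,15 coordinates, which would explain the empty/broken PDF. A different approach, sketched here with the html2pdf.js library (the wrapper id pdfContent and the CDN version are assumptions), is to wrap the divs you want exported in one container and hand that element over:

```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/html2pdf.js/0.10.1/html2pdf.bundle.min.js"></script>
<script>
document.getElementById('generatePDF').addEventListener('click', function () {
  // assumed: a wrapper <div id="pdfContent"> around header, prodtitle,
  // prodinfotitle, prodinfo, prodmaterialstitle, prodmaterials and the gallery
  var element = document.getElementById('pdfContent');
  var title = document.getElementById('htmlContent1').textContent.trim();
  html2pdf().from(element).save(title + '.pdf'); // file named after prodtitle
});
</script>
```

Grouping the divs in one wrapper also means the menu, right-hand column and footer are excluded simply by leaving them outside it.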



2022-06-24

Issues with Excel Textjoin, need a work around

I am trying to figure out how to set up a couple of Excel sheets that comb through a lot of information. I cannot display the actual Excel sheet due to what's in it, but I have recreated a sheet that does the same thing.

Excel Example

I am trying to get the dates to populate automatically when the COUNTIF finds something. Say Blake shows up multiple times in the data range I need it to look through; I need all the dates, or at least the most recent date, to populate automatically in one cell next to the COUNTIF number, or in the same cell as the COUNTIF.

The cell should display 06/22/2022, 06/29/2022.

I am using Excel 2016; I do not have a built-in TEXTJOIN.

@Scott, thank you for responding. I added your TEXTJOIN UDF and I am getting a #VALUE! error in the cell. I tried with a cell that would return one date, no date, and multiple dates. The date format in the sheet I cannot show is 20220623, so I am not sure if that matters. I got it to work on the test sheet I made for the screenshot, just not the one I need it to work on. The only differences are that the sheet I'm pulling the data from uses dashes, numbers and letters in the cells, and the cell I'm populating is on a different sheet, so the formula looks like:

=TEXTJOIN(",",TRUE,IF('SHEET2!E:E="example-A-1",'SHEET2!A:A,"")) 

Using this UDF

Function TEXTJOIN(delim As String, skipblank As Boolean, arr)
    Dim d As Long
    Dim c As Long
    Dim arr2()
    Dim t As Long, y As Long
    t = -1
    y = -1
    If TypeName(arr) = "Range" Then
        arr2 = arr.Value
    Else
        arr2 = arr
    End If
    On Error Resume Next
    t = UBound(arr2, 2)
    y = UBound(arr2, 1)
    On Error GoTo 0

    If t >= 0 And y >= 0 Then
        For c = LBound(arr2, 1) To UBound(arr2, 1)
            For d = LBound(arr2, 2) To UBound(arr2, 2)
                If arr2(c, d) <> "" Or Not skipblank Then
                    TEXTJOIN = TEXTJOIN & arr2(c, d) & delim
                End If
            Next d
        Next c
    Else
        For c = LBound(arr2) To UBound(arr2)
            If arr2(c) <> "" Or Not skipblank Then
                TEXTJOIN = TEXTJOIN & arr2(c) & delim
            End If
        Next c
    End If
    TEXTJOIN = Left(TEXTJOIN, Len(TEXTJOIN) - Len(delim))
End Function
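Two things that would plausibly produce #VALUE! with the formula above (hedged guesses, since the real sheet isn't visible): the sheet reference quoting is off ('SHEET2!E:E has an unmatched apostrophe; it should be SHEET2!E:E, or 'SHEET2'!E:E if the name needed quoting), and in Excel 2016 a UDF fed a whole-range IF(...) must be entered as an array formula, otherwise IF evaluates only the first cell:

```
=TEXTJOIN(",", TRUE, IF(SHEET2!E:E="example-A-1", SHEET2!A:A, ""))
```

entered with Ctrl+Shift+Enter rather than Enter alone. Limiting E:E to a bounded range such as E1:E1000 also keeps the UDF from looping over a million rows.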


PyQt5: Scaling QLabel to image size

I want to scale my label, because it acts like the frame for the picture. As you can see in the picture, the frame is just too big. I tried adjustSize() but it didn't work. Maybe it's because I'm using a layout?

I would really appreciate your help.

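Without the code it's hard to be sure, but a common fix (a sketch; label and pixmap are assumed names from your setup) is to pin the label to the pixmap's size so the layout stops stretching it:

```python
# assumed objects: label is the QLabel, pixmap the loaded QPixmap
label.setPixmap(pixmap)
label.setFixedSize(pixmap.size())   # label now matches the image exactly
# or keep it resizable but stop the layout from expanding it:
# label.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Fixed)
```

Inside a layout, adjustSize() is overridden by the layout's own sizing, which is why it appeared to do nothing.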



Views vs Materialize Vs Materialized View in Kusto

Scenario: data in a Kusto table is updated every 5 hours. Task: call a query from a .NET API; in the query, create a subquery and use that subquery to perform a join on a bigger table.

let table1=materialize(
Customer|where CustomerId=="cust-reg-aabb-cc232"|distinct CustomerId,City);
CustomerPurchase
|where CustomerId=="cust-reg-aabb-cc232"
//perform join with table1 and other things

or

let table1=view(){
Customer|where CustomerId=="cust-reg-aabb-cc232"|distinct CustomerId,City};
CustomerPurchase
|where CustomerId=="cust-reg-aabb-cc232"
//perform join with table1 and CustomerPurchase

CustomerPurchase and Customer data is updated every 5 hours (new rows being added). Which is more optimized: creating a view or using the materialize() method? I went through the documentation but could not understand the difference between the two.

Also, since I am implementing an API, is it possible to use a materialized view instead of table1?
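On that last point: a materialized view is a separate persisted entity created over an aggregation, not something declared inside a query the way let + materialize() is. A sketch of the deduplication case (the view name is assumed, and materialized views must be defined over an aggregation, so distinct becomes take_any):

```
.create materialized-view CustomerCities on table Customer
{
    Customer | summarize take_any(City) by CustomerId
}
```

The API query would then read from CustomerCities | where CustomerId == "cust-reg-aabb-cc232" and join against CustomerPurchase, with the dedup work paid once at ingestion time rather than on every call.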



The d-neighborhood of the k-mer Pattern is the collection of all k-mers that are at most Hamming distance d from Pattern

The d-neighborhood of the k-mer Pattern is the collection of all k-mers that are at most Hamming distance d from Pattern.

How many 4-mers are in the 3-neighborhood of Pattern = ACGT?

Note that the d-neighborhood of Pattern includes Pattern.
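Since d = 3 allows mismatches in at most 3 of the 4 positions, the count is the sum over i = 0..3 of C(4, i)·3^i = 1 + 12 + 54 + 108 = 175; equivalently, all 4^4 = 256 4-mers minus the 3^4 = 81 at distance exactly 4. A brute-force check:

```python
from itertools import product

def hamming(p, q):
    """Number of positions where two equal-length strings differ."""
    return sum(a != b for a, b in zip(p, q))

pattern, d = "ACGT", 3
neighborhood = [kmer
                for kmer in ("".join(t) for t in product("ACGT", repeat=len(pattern)))
                if hamming(kmer, pattern) <= d]
count = len(neighborhood)
print(count)  # → 175
```

Note the pattern itself is included, since its Hamming distance to itself is 0.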



How to do a performance test for a mocha library?

I need to do performance testing for the below Mocha unit test.

describe("simple test", async() =>{

     it("first test" , async()=>{

        assert.equal( 1 , 1);
     })
})

But I do not know which open source tool to use.

Can anyone suggest some open-source performance-testing tools for the Mocha library?



How to modify ImageFolder in pytorch to return a tensor of a different shape?

How to extend torch.datasets.ImageFolder in pytorch to return a tensor of a different shape?

It currently returns: torch.Size([1, 3, 256, 256]). I want to return [1, 10, 3, 256, 256].

I have a directory with multiple images separated into folders. Each folder has up to 3000 images. I would like to modify the __getitem__ function so that it returns bags of images, where each bag contains 10 images.

Thank you!
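One way to do this (a sketch, assuming torchvision is available and that grouping 10 consecutive samples into a bag is acceptable) is to subclass ImageFolder and override __len__ and __getitem__:

```python
# a sketch: bag `bag_size` consecutive ImageFolder samples per index
import torch
from torchvision import datasets

class BagImageFolder(datasets.ImageFolder):
    """ImageFolder variant returning bags of `bag_size` images at a time."""

    def __init__(self, root, bag_size=10, **kwargs):
        super().__init__(root, **kwargs)
        self.bag_size = bag_size

    def __len__(self):
        # one bag per `bag_size` underlying samples (remainder dropped)
        return super().__len__() // self.bag_size

    def __getitem__(self, index):
        start = index * self.bag_size
        # explicit super() form so it also works inside the comprehension
        items = [super(BagImageFolder, self).__getitem__(i)
                 for i in range(start, start + self.bag_size)]
        images = torch.stack([img for img, _ in items])   # [10, 3, H, W]
        labels = torch.tensor([label for _, label in items])
        return images, labels
```

Wrapped in a DataLoader with batch_size=1, each batch then has the desired [1, 10, 3, 256, 256] shape.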



2022-06-23

Linux kernel module reference counter is always zero

I am implementing a kernel module that exposes some data to userspace using the mmap interface.

I create a file in /proc file system passing struct file_operations with pointers to needed functions:

static struct file_operations module_file_ops = {
    .owner = THIS_MODULE,
    .open = module_open,
    .mmap = module_mmap
};

proc_create(THIS_MODULE->name, 0444, NULL, &module_file_ops);

Userspace application is able to open and read from the file (mmap contents) as expected.

When I do lsof I see the file is opened by the userspace app.

However, lsmod always reports zero as the usage counter despite my setting .owner to THIS_MODULE, so I can remove the module with rmmod while the file is open, which crashes the system.

Please advise.


module.c

#include <linux/fs.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/proc_fs.h>

static struct proc_dir_entry *proc_file;

static const struct file_operations test_file_ops = {
    .owner = THIS_MODULE
};

static int __init initialize(void) {
    int error = 0;

    proc_file = proc_create(THIS_MODULE->name, 0444, NULL, &test_file_ops);
    if (!proc_file) {
        error = -EIO;
    }

    return error;
}

static void __exit teardown(void) {
    proc_remove(proc_file);
}

module_init(initialize);
module_exit(teardown);

Makefile

obj-m += test.o
test-objs := module.o

CC=gcc
KDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)
EXTRA_CFLAGS=-I/usr/include -I/usr/include/x86_64-linux-gnu

all:
        $(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules
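If the automatic .owner accounting isn't pinning the module, a pattern sometimes used (a sketch, unverified against a specific kernel version; module_mmap is the handler from the first snippet, and a release handler is added) is to take the module reference explicitly in open and drop it in release:

```c
/* sketch: pin the module while the /proc file is open */
static int module_open(struct inode *inode, struct file *file)
{
    if (!try_module_get(THIS_MODULE))
        return -ENODEV;          /* module is already being unloaded */
    return 0;
}

static int module_release(struct inode *inode, struct file *file)
{
    module_put(THIS_MODULE);
    return 0;
}

static const struct file_operations module_file_ops = {
    .owner   = THIS_MODULE,
    .open    = module_open,
    .release = module_release,
    .mmap    = module_mmap,
};
```

With this in place, lsmod should report a non-zero use count while the userspace app holds the file open, and rmmod refuses to unload until it is closed.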


Find all JS classes used in web page [duplicate]

Is there a way to find every unique class definition used in a WebPage? For example, if a web page has three unrelated JS classes Dog, Cat, and Mouse, but I do not know that these classes exist, how would I discover them?

//returns all the JS classes in a document
function getAllDocumentClasses(){

}

This would be a function I would run in the chrome console.

Context: I'm preparing some documentation on extracting data from web pages using javascript. One approach to do this is to investigate the classes used in a library, for example, to see all the instances of a Highcharts and then extract the data used in that Highcharts instances. But if we do not know which libraries a web page is using, then this is more difficult.

Edit: Seems like this might not be possible with javascript. But if you go to Chrome Developer Tools and type window., then you can browse through available classes in the window.

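Building on that edit, the browsable part can be automated: enumerate the global object's own properties and keep the ones whose source text starts with class. A sketch (in a browser you would pass window; globalThis lets the same code run under Node):

```javascript
// Sketch: list globals whose value looks like a class declaration
function getAllGlobalClasses(root = globalThis) {
  return Object.getOwnPropertyNames(root).filter((name) => {
    try {
      const v = root[name];
      return typeof v === "function" &&
             /^class[\s{]/.test(Function.prototype.toString.call(v));
    } catch (e) {
      return false; // some globals throw on property access
    }
  });
}

class Dog {}
globalThis.Dog = Dog; // pretend the page defined a global class
const found = getAllGlobalClasses();
```

The caveat matches the edit above: this only finds class-syntax constructors reachable from the global object; classes held inside module scopes or closures stay invisible.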



Declaring runtime exceptions

What is the guideline for methods declaring runtime exceptions?

Let's say I call a 3rd party routine that throws SQLException. Is it permissible/standard/acceptable for that routine to be able to throw RuntimeExceptions without declaring that it does so?

As always, I am surprised by the confusion my questions cause :-D This is probably because I am confused.

In the following, the callable is a lambda that issues a commit, and this throws SQLException. callable.call throws Exception.

private <T> T doThis(Callable<T> callable) throws SQLException {
    try {
        return callable.call();
    } catch (SQLException e) {
        // do stuff
        throw e;
    } catch (Exception e) {
        return null; // Eats any exception from call(), which makes me scream internally.
    }
}

What I surmise from this is that the coder wanted doThis to throw SQLException. However, using a Callable meant the routine would have to declare throws Exception unless the coder did something. So he caught Exception and swallowed it. But Exception is the parent of RuntimeException, so we're eating those too.

What am I to do? Making doThis throw Exception seems clumsy and random. Wrapping any exception being thrown in a RuntimeException and raising that preserves the coder's intent but seems suspect.

EDIT -

ok, I have learned, thank you. Now the question is, what to do about it. Clearly, eating the exception is a bad idea. Diluting the SQLException as declared seems not great.

How does the collective wisdom of SO feel about wrapping the Exception in a RuntimeException?

    ...
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
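One alternative to the catch-all (a sketch, not necessarily the standard answer): define a functional interface whose call() declares SQLException, so the compiler never forces doThis to handle a plain Exception, and runtime exceptions propagate unchanged. The SqlCallable name is hypothetical:

```java
import java.sql.SQLException;

public class Demo {
    // Hypothetical interface: like Callable, but declares SQLException,
    // so doThis() needs no catch-all for Exception.
    @FunctionalInterface
    interface SqlCallable<T> {
        T call() throws SQLException;
    }

    static <T> T doThis(SqlCallable<T> callable) throws SQLException {
        try {
            return callable.call();
        } catch (SQLException e) {
            // do stuff, then rethrow with the declared type intact
            throw e;
        }
        // RuntimeExceptions from call() now propagate unchanged --
        // undeclared, as runtime exceptions always may be.
    }

    public static void main(String[] args) throws SQLException {
        System.out.println(doThis(() -> "committed"));  // committed
    }
}
```

This keeps the method's contract honest (only SQLException is declared) without swallowing anything.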


How can I fix the Cmake link problem in macOS

I have installed the latest CMake on my MacBook. I also ran "brew link cmake", but it still displays:

zsh: command not found: cmake

How can I fix it?



Why does cin.fail() ignore periods and commas in "while" loop? What's a simple way to error check input?

Sorry it sounds stupid, but I'm just getting started in C++ programming...
My input checking loop works fine if I want it to accept 'float' numbers, but outputs some weird results if it has to accept only 'int' numbers.
EDIT: Tutorials I read/watched assumed that user wouldn't input things like that, which is dumb...

#include <iostream>
#include <limits>
using namespace std;

// input with check
int num(int i) {
    int x=0;
    cout<<"Enter #"<<i+1<<": ";
    cin>>x;
    while(cin.fail()) {
        cin.clear();
        cin.ignore(numeric_limits<streamsize>::max(),'\n');
        cout<<"Enter INTEGER #"<<i+1<<": ";
        cin>>x;
    }
    return x;
}

int main() 
{
    int i, min=0, max=0, n=3;
    int arr[3];

    // save input into array
    for(i=0; i<n; i++) { arr[i] = num(i); }
    
    // find 'max' & 'min'
    max = arr[0];
    for(i=1; i<n; i++) {
        if(max < arr[i]) { max = arr[i]; }
    }
    min = arr[0];
    for(i=1; i<n; i++) {
        if(min > arr[i]) { min = arr[i]; }
    }
    cout << endl << "MAX = " << max;
    cout << endl << "MIN = " << min;
    return 0;
}


For example, if I type in 5.2, it takes the 5, skips one loop, and prints the error message on the next loop.
Output:
Enter #1: 2
Enter #2: 5.5
Enter #3: Enter INTEGER #3: g
Enter INTEGER #3: -2
MAX = 5
MIN = -2

EDIT2: I did some digging and found something that kind of works for my use case and is simple enough for my skill level, but it still has some quirks. The function filters out any string containing characters other than the ones specified between the quotes, and checks that '-' appears only as the first character.

bool isNumeric(string const &str) {
    if(str.find_last_of("-")==0 || str.find_last_of("-")==SIZE_MAX) {
        return !str.empty() && str.find_first_not_of("-0123456789") == string::npos;
    } else { return 0; }
}
int num(int i) {
    string str;
    cout << "Enter #" << i+1 << ": ";
    getline(cin,str);
    while(isNumeric(str)==0) {
        cout << "Enter INTEGER #" << i+1 << ": ";
        getline(cin,str);
    }
    return stoi(str); // convert string to integer
}

Source: https://www.techiedelight.com/determine-if-a-string-is-numeric-in-cpp/ - method #3



How can I read the color of autoshape in python-pptx?


The colorful circles are autoshapes in python-pptx. Now I want to read their color (RGB or whatever).

What I found in the documentation is to use

shape.fill.solid() 
shape.fill.fore_color.rgb

But it raises

AttributeError: no .rgb property on color type '_NoneColor'

Any idea to access the color of autoshape?

(BTW, it would be best to also post the color-change method, since I'm reading the color in order to change it by HSV adjustment.)



2022-06-22

Webelement input not interactable (when choosing a specific one from list)

I'm trying to find a specific input and send keys to it on this website. I have no problem locating it in the browser's console with, for example, (//input)[4]. However, when I try to find it and send keys in my IDE (using Java with Selenium), I get the org.openqa.selenium.ElementNotInteractableException: element not interactable exception. I tried the following methods:

  1. With simple findelement and sendkeys:

    WebElement ebookAddInput = driver.findElement(By.xpath("(//input)[4]"));
    ebookAddInput.clear();
    ebookAddInput.sendKeys("4");
    
  2. Using list of items to choose specific one:

    List<WebElement> allInputs = driver.findElements(By.xpath("//input[@name='commerce-add-to-cart-quantity-input']"));
    System.out.println(allInputs.size());
    allInputs.get(4).sendKeys("4");
    

In both cases I got the mentioned error. The only way it works is by not specifying which input I want; then there is no error and the first input on the website gets filled:

WebElement ebookAddInput = driver.findElement(By.xpath("//input[@name='commerce-add-to-cart-quantity-input']"));
ebookAddInput.clear();
ebookAddInput.sendKeys("4");

Does someone know why this is happening and whether there's a way to solve it? I will really appreciate the help.



How do I put the focus on the QuickFix window after automatically opening the QuickFix list?

I have the following in my .vimrc to automatically open/close the QuickFix window after running :make :

  augroup quickfix
    autocmd!
    autocmd QuickFixCmdPost [^l]* cwindow
    autocmd QuickFixCmdPost l*    lwindow
  augroup END

It works fine, but when the autocmd opens the QuickFix window, it does not put the focus on the window. Is there any way to automatically put the focus on the QuickFix window after the autocmd opens it?



Does SML provide an efficient immutable list implementation for very large collections, or should arrays and mutation be used for this optimization?

Every time we do cons and destructuring and similar operations on lists, we create copies of the original lists. This can become a performance issue with very large collections.

It's my understanding that to improve this situation, some programming languages implement lists as data structures that can be copied much more efficiently. Is there something like this in SML? Perhaps in the language definition, or as a feature that is implementation dependent?

If there's no such data structure, are arrays and mutability one pattern that optimizes on large lists? As long as the state is local to the function, can the function still be considered pure?

SML is multi-paradigm, but idiomatic SML is also functional-first, so both "lists with efficient copying" and "mutable arrays" approaches should make sense, depending on what the core language offers.

Is there an immutable data structure that is more efficient than the normal singly linked list for very large collections? If not, is there a native purely functional data structure that can optimize this scenario? Or should mutability and arrays be used internally?



Using numpy.where to calculate new pandas column, with multiple conditions

I have a problem with how to code this condition appropriately. I'm creating a new pandas column in my dataframe, new_column, which performs a subtraction on the values in column test based on the index of the row. I'm currently using this code to subtract a different value every 4 rows:

subtraction_value = 3
subtraction_value_2 = 6

data = pd.DataFrame({"test": [12, 4, 5, 4, 1, 3, 2, 5, 10, 9]})

data['new_column'] = np.where(data.index % 4,
                              data['test'] - subtraction_value,
                              data['test'] - subtraction_value_2)
print(data['new_column'])


[6,1,2,1,-5,0,-1,3,4,6]

However, I now wish to get it performing the higher subtraction on the first two positions in the column, and then 3 subtractions with the original value, another two with the higher subtraction value, 3 small subtractions, and so forth. I thought I could do it this way, with an | condition in my np.where statement:

data['new_column'] = np.where((data.index%4) | (data.index%5),
                              data['test']-subtraction_value,
                              data['test']-subtraction_value_2)

However, this didn't work, and I feel my maths may be slightly off. My desired output would look like this:

print(data['new_column'])

[6,-2,2,1,-2,-3,-4,3,7,6]

As you can see, this slightly shifts the pattern. Can I still use numpy.where() here, or do I have to take a new approach? Any help would be greatly appreciated!
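numpy.where still works here. The described cycle (two large subtractions, then three small ones) has period 5, so the index modulo 5 gives the position within the cycle; `index % 5 < 2` selects the first two rows of each cycle. A sketch using the question's variable names:

```python
# Sketch: a 5-row cycle -- big subtraction on positions 0-1,
# small subtraction on positions 2-4.
import numpy as np
import pandas as pd

subtraction_value = 3      # small subtraction
subtraction_value_2 = 6    # big subtraction

data = pd.DataFrame({"test": [12, 4, 5, 4, 1, 3, 2, 5, 10, 9]})

data["new_column"] = np.where(data.index % 5 < 2,
                              data["test"] - subtraction_value_2,
                              data["test"] - subtraction_value)
print(data["new_column"].tolist())  # [6, -2, 2, 1, -2, -3, -4, 2, 7, 6]
```

This follows the described 2-big/3-small pattern exactly; it differs from the sample output in one position, which may be the arithmetic slip mentioned in the question.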



Git suddenly won't authenticate to GitHub automatically

I'm relatively new to git. I've been working on projects, pushing to and pulling from my GitHub repos. After I set everything up, all I needed to do was type git push origin main in the bash terminal and everything worked.

Earlier today, I wanted to push to a remote repo on GitHub. I typed git pull origin main exactly as I have a hundred times, but suddenly I get the error "fatal: Unable to persist credentials with the 'wincredman' credential store."

I've tried everything I could find. I unset my credential manager, deleted old credentials in Windows Credential Manager, tried creating a PAT. I finally managed to successfully pull/push using the PAT, but now I have to manually input my username and PAT every single time.

I have no idea what happened, but I would love to be able to push/pull from remote without needing to login.
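A sketch of re-attaching a credential helper so the PAT is cached after the first prompt. "store" writes the PAT in plain text to ~/.git-credentials, which is the simple fallback; whether it is acceptable depends on your security needs:

```shell
# Sketch: configure the plain-text credential store as the helper,
# then confirm which helper is active.
git config --global credential.helper store
git config --global --get credential.helper
```

If Git Credential Manager is installed and working, `git config --global credential.helper manager` is the more secure choice on Windows; the "wincredman" error suggests GCM itself failed to persist to the Windows credential store, and switching helpers at least restores automatic authentication.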



OsmDroid: User agent doesn't work with Fragments

Every example available on the internet tells us to use this line of code to get the user agent (without it the map won't load).

osm.osmdroid.config.Configuration.getInstance()
            .load(applicationContext, this.getPreferences(Context.MODE_PRIVATE))

The problem is that applicationContext yields "Unresolved reference". Same goes for this.getPreferences().

I've tried to put that code into MainActivity (no fragments) and it works. Is there any way to replace it in this context?



2022-06-21

How to close an element by clicking outside of it?

I'm currently trying to close an element when its style is different from "display = none". I'm getting an error in the console telling me that lists.some is not a function, so I may not have understood the "some" method well.

More info on what I want: given that I have 3 lists (in lists), when I click outside of them or their elements, I want to close all the lists.

Thanks in advance

const lists = document.querySelectorAll(".list");

function closeList() {
    document.addEventListener("click", () => {
        if(lists.some((list) => list.style.display != "none")) {
            return lists.style.display = none;
        } else return;
    });
};
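For reference: querySelectorAll returns a NodeList, which has no .some(), and style.display must be compared and assigned as the string "none" (a bare `none` is an undefined variable). A DOM-free sketch of the two pieces of logic, with plain objects standing in for elements so it runs anywhere:

```javascript
// Sketch: convert the array-like collection before using array methods.
function anyVisible(lists) {
  return Array.from(lists).some((list) => list.style.display !== "none");
}

function hideAll(lists) {
  Array.from(lists).forEach((list) => { list.style.display = "none"; });
}

// In the browser, the wiring would look roughly like:
//   const lists = document.querySelectorAll(".list");
//   document.addEventListener("click", (e) => {
//     if (![...lists].some((l) => l.contains(e.target))) hideAll(lists);
//   });

// demo with stand-in objects
const fakeLists = [{ style: { display: "block" } },
                   { style: { display: "none" } }];
console.log(anyVisible(fakeLists)); // true
hideAll(fakeLists);
console.log(anyVisible(fakeLists)); // false
```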


Dynamic group by list data indexes

I want to group by dynamically according to more than one index in a list.

Model

public class MyModel
{
    public string baseTableName { get; set; }
    
    public List<object> columns { get; set; }
    
    public List<List<object>> rows { get; set; }
}

I need to find the indexes of the pk columns list in this model. Then I want to group the data in the rows list by pk columns.

Sample Request

{
    "tableName": "fooTable",
    "columns": [
        "c1",
        "c2",
        "c3"
    ],
    "rows": [
        [
            "1",
            "a",
            "kobe"
        ],
        [
            "2",
            "a",
            "lebron"
        ]
    ]
}

In this sample request, columns c1 and c2 are pk.

Codes

// Returns indexes of pk columns.
var pkColumnIndexes = GetPkIndexes(columns);

var gp = requestObject.rows.GroupBy(gp => new { indx1 = gp[pkColumnIndexes[0]], indx2 = gp[pkColumnIndexes[1]] });
var groupData = gp.Select(s => s.LastOrDefault()).ToList();

--

(gp => new { indx1 = gp[pkColumnIndexes[0]], indx2 = gp[pkColumnIndexes[1]] })

I want to group this part according to indexes.

Can the GroupBy keys be created dynamically with a for loop?



CSS file linking issue [closed]

Why is my CSS file not linked when the HTML file is inside a subfolder of the folder that contains both the HTML and CSS folders? I tried removing the main folder's name from the link path, but the problem persists.

https://imgur.com/a/BLfHYaj

*The CSS file is fine; it works only when the HTML file is not in a separate folder.



Loop error while converting Text to Number across all sheets with VBA

I'd like your help repeating the code below across all sheets in the file, to convert column B from text to number. I tried a For...Next loop, but it gives me an error every time I run it.

Screen shot from the file sample


    Sub Convert()
        [B:B].Select
        With Selection
            .NumberFormat = "General"
            .Value = .Value
        End With
    End Sub




Quickbase: Query multiple table to fetch data in single API

I am looking for the body format for Quickbase /records/query that would let me query multiple tables in a single API call. Right now, I have to use 4 API calls to query 4 different tables. Is there any way to do that in a single API call?



Proper way to pre-fill a sender's signature in bulk?

I currently have a working DocuSign integration going using embedded signing. An administrator logs in, provides consent once (auth code grant, extended scope) and the app can generate sign requests in their name to different users without them needing their own DocuSign accounts.

What I'm trying to build now though is for the administrator to sign as well, and preferably only once for the same document which is then sent to different users. So I'd like to make an embedded signing view for the admin to sign once, then use this same half-signed document/template to create multiple embedded signing views for different users. My client would like the DocuSign audit trail to show that the administrator signed each document, so just pre-filling it with an image won't do.

I've been looking at templates, but haven't been able to find examples of or methods for including a sender signature in a template, or embedded template editing. I'm using the eSign package for C# by the way.

So my question is, is this even possible, and if so what's the proper way to achieve it?



Strikethrough text in a Manim Table

Here is the source code of the documentation page of Table in Manim:

class TableExamples(Scene):
    def construct(self):
        t0 = Table(
            [["This", "is a"],
            ["simple", "Table in \n Manim."]])
        t1 = Table(
            [["This", "is a"],
            ["simple", "Table."]],
            row_labels=[Text("R1"), Text("R2")],
            col_labels=[Text("C1"), Text("C2")])
        t1.add_highlighted_cell((2,2), color=YELLOW)
        t2 = Table(
            [["This", "is a"],
            ["simple", "Table."]],
            row_labels=[Text("R1"), Text("R2")],
            col_labels=[Text("C1"), Text("C2")],
            top_left_entry=Star().scale(0.3),
            include_outer_lines=True,
            arrange_in_grid_config={"cell_alignment": RIGHT})
        t2.add(t2.get_cell((2,2), color=RED))
        t3 = Table(
            [["This", "is a"],
            ["simple", "Table."]],
            row_labels=[Text("R1"), Text("R2")],
            col_labels=[Text("C1"), Text("C2")],
            top_left_entry=Star().scale(0.3),
            include_outer_lines=True,
            line_config={"stroke_width": 1, "color": YELLOW})
        t3.remove(*t3.get_vertical_lines())
        g = Group(
            t0,t1,t2,t3
        ).scale(0.7).arrange_in_grid(buff=1)
        self.add(g)

I'm trying to strike through something. To do so, I took inspiration from this documentation page. Hence, I've tried:

class TableExamples(Scene):
    def construct(self):
        t0 = Table(
            [['``<span strikethrough="true" strikethrough_color="red">This</span>``', "is a"],
            ["simple", "Table in \n Manim."]])
        t1 = Table(
            [["This", "is a"],
            ["simple", "Table."]],
            row_labels=[Text("R1"), Text("R2")],
            col_labels=[Text("C1"), Text("C2")])
        t1.add_highlighted_cell((2,2), color=YELLOW)
        t2 = Table(
            [["This", "is a"],
            ["simple", "Table."]],
            row_labels=[Text("R1"), Text("R2")],
            col_labels=[Text("C1"), Text("C2")],
            top_left_entry=Star().scale(0.3),
            include_outer_lines=True,
            arrange_in_grid_config={"cell_alignment": RIGHT})
        t2.add(t2.get_cell((2,2), color=RED))
        t3 = Table(
            [["This", "is a"],
            ["simple", "Table."]],
            row_labels=[Text("R1"), Text("R2")],
            col_labels=[Text("C1"), Text("C2")],
            top_left_entry=Star().scale(0.3),
            include_outer_lines=True,
            line_config={"stroke_width": 1, "color": YELLOW})
        t3.remove(*t3.get_vertical_lines())
        g = Group(
            t0,t1,t2,t3
        ).scale(0.7).arrange_in_grid(buff=1)
        self.add(g)

But it doesn't work. I've also tried using LaTeX tags, but Manim doesn't interpret them as LaTeX because (I think?) the entries are rendered as Text and not Tex. Any ideas?



2022-06-20

Infinitiy NumberFormatException when creating BigDecimal

I have a function that formats a float value into USD currency format, but when the values are fractions of a dollar they get formatted to the last two non-zero decimals.

public static String currency(float number) {

    NumberFormat currencyFormatter = NumberFormat.getCurrencyInstance();
    currencyFormatter.setCurrency(Currency.getInstance("USD"));

    String[] floatParts = new BigDecimal(number).toPlainString().split("\\.");

    if(number < 1 && floatParts.length == 2) {

        String decimalPortion = floatParts[1];
        int numDecimalPlaces = 0;
        while (decimalPortion.charAt(numDecimalPlaces) == '0')
            numDecimalPlaces++;

        if(numDecimalPlaces > 2)
            currencyFormatter.setMaximumFractionDigits(numDecimalPlaces + 2);
        else
            currencyFormatter.setMaximumFractionDigits(2);

    } else {
        currencyFormatter.setMaximumFractionDigits(2);
    }

    return currencyFormatter.format(number);
}

I'm using Firebase Crashlytics and I'm getting this exception:

java.lang.NumberFormatException, Infinity or NaN: Infinity, exception at:

    String[] floatParts = new BigDecimal(number).toPlainString().split("\\.");

What is causing this exception? If a non-numerical value were passed to this function, wouldn't the exception occur before reaching this line of code?

EDIT

The calling method:

@SerializedName("current_price")
@Expose
@ColumnInfo(name = "current_price")
private float currentPrice;

public void setCurrentPrice(float currentPrice) {

    if(currentPrice < 0.0001) {
        priceName = "$" + NumberFormatter.roundToLastDecimalDigits(currentPrice, 3);
    } else {
        priceName = NumberFormatter.currency(currentPrice);
    }

    this.currentPrice = currentPrice;
}
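For reference: a float silently overflows to Infinity, and the BigDecimal(double) constructor throws NumberFormatException for Infinity and NaN, which is the only way that particular line can fail. A minimal reproduction (the 4e38 input is a hypothetical stand-in for an out-of-range value arriving via deserialization):

```java
import java.math.BigDecimal;

public class Demo {
    public static void main(String[] args) {
        // Any value above Float.MAX_VALUE (~3.4e38) parses to Infinity
        // without throwing.
        float huge = Float.parseFloat("4e38");
        System.out.println(huge);  // Infinity

        try {
            new BigDecimal(huge);  // the float widens to double here
        } catch (NumberFormatException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

So no "non-numerical value" needs to reach the method: the float itself is a perfectly legal Infinity by the time currency() runs. Guarding with Float.isInfinite/Float.isNaN before the BigDecimal conversion would avoid the crash.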


SUM and multiply values in one row based on text in 1st column in excel

My goal is simple, but I can't find the solution, as I'm not sure what to call it in general. I have a table with team names (column 1) and the amount of items (column 3) assigned to them. The table is a kind of 'outcome' list:

Team A Date 200 pcs 2
Team B Date 300 pcs 4
Team A Date 20 pcs 6

and so on.

I would like to ask what function I should use for an advanced return such as: if <A1:A> = "Team B", then <C1:C> * <D1:D> (also filtered for the current team only). SUMIF(range, criteria, sum range) is close to what I need, but the multiplication messes it up.

Is there anything where I can set range and criteria and just be sure I'm working only with Team A in my formula?
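For reference, SUMPRODUCT is the usual way to combine a per-row criteria test with a multiplication in a single formula (a sketch; the A1:A10 / C1:C10 / D1:D10 ranges are assumptions based on the sample table, with team names in column A):

```
=SUMPRODUCT((A1:A10="Team A") * C1:C10 * D1:D10)
```

The comparison produces an array of TRUE/FALSE values that the multiplication coerces to 1/0, so only the matching team's rows contribute to the sum.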



How to customize barchart in plotly R?

On the bar chart, the 'sale' and 'share' variables are visualized as bars, while 'cost' is shown by a red line. Now I want to drop/remove this red line, keep only the numbers in boxes, and add the corresponding variable to the legend. Moreover, I want to add the average value of 'share' as a horizontal line on the Y axis.

df <- data.frame (model  = c("A", "B", "C","D","E","F"),
                      share = c(12,20,15,9,60,20),
                      sale = c(16,25,18,14,67,28),
                      cost = c(14,19,28,24,57,28))

#set levels of model by cost
df$model <- factor(df$model, levels = arrange(df, desc(df$cost))$model)

library(tidyverse)

df_long <- df %>% 
  pivot_longer(
    cols = -model
  ) 


df_long %>% 
  filter(name != "cost") %>% 
  plot_ly(x = ~model, y = ~value, color = ~name, type = "bar", 
          customdata = ~name,  colors = c("blue", "gray"),
          hovertemplate = paste0("Model: %{x}<br>Value: %{y}<br>",
                                 "Name: %{customdata}<extra></extra>")) %>%
  add_lines(inherit = F, data = df, x = ~model, 
            y = ~cost, color = I("red"),
            name = "cost",
            hovertemplate = paste0("Model: %{x}<br>Value: %{y}<br>",
                                   "Name: cost<extra></extra>")) %>% 
  add_annotations(data = df, x = ~model, y = ~cost, text = ~cost,
                  bgcolor = "white", bordercolor = "black", 
                  xshift = 15, yshift = 15, showarrow = F) %>% 
  layout(barmode = "group")




Functional composition of Optionals

I have 2 Optionals (potentially containing objects) that I would like to combine so that I get the following results:

            ||       first operand 
second      ++-------------+-------------
operand     ||    empty    | optional(x)
============||=============|=============
empty       ||    empty    | optional(x)
------------++-------------+-------------
optional(y) || optional(y) |optional(x+y)

In other words, a non-empty Optional always replaces/overwrites an empty one, and two non-empty Optionals are combined according to some + function.

Initially, I assumed that the standard monadic flatMap() method would do the trick, but (at least in Java) Optional.flatMap() always returns an empty optional when the original Optional was already empty (and I'm not sure if any other implementation would comply with the Monad Laws).

Then, as both operands are wrapped in the same monadic type, I figured that this might be a good job for an Applicative Functor. I tried a couple different functional libraries, but I couldn't implement the desired behavior with any of the zip/ap methods that I tried.

What I'm trying to do seems to me a fairly common operation that one might do with Optionals, and I realize that I could just write my own operator with the desired behavior. Still, I am wondering if there is a standard function/method in functional programming to achieve this common operation?

Update: I removed the java tag, as I'm curious how other languages handle this situation
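For reference, the table above is what you get by lifting a semigroup's + into the optional type (Haskell's Semigroup instance for Maybe behaves this way). A sketch of the operator in Java, since no standard java.util.Optional method provides it directly:

```java
import java.util.Optional;
import java.util.function.BinaryOperator;

public class Demo {
    // Combine per the truth table: empty acts as the identity, and two
    // present values are merged with the supplied + function.
    static <T> Optional<T> combine(Optional<T> a, Optional<T> b,
                                   BinaryOperator<T> plus) {
        if (a.isEmpty()) return b;
        if (b.isEmpty()) return a;
        return Optional.of(plus.apply(a.get(), b.get()));
    }

    public static void main(String[] args) {
        BinaryOperator<Integer> plus = Integer::sum;
        System.out.println(combine(Optional.of(2), Optional.of(3), plus));   // Optional[5]
        System.out.println(combine(Optional.empty(), Optional.of(3), plus)); // Optional[3]
    }
}
```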



How to optimize the performance of the following SQL query?

For the following query, I tried creating an index on table tmp.req_index_cont_t with columns indcont_key_1 and ind_no, but the optimizer still performs a full table scan. I don't see how to resolve this issue; please guide me:

CREATE INDEX tmp.REQ_CONT_T_IDX 
            ON tmp.req_index_cont_t (ind_no, indcont_key_1);

select distinct
            rit.item_no as item_no,
            rit.item_type as item_type,    
            rit.ind_no as ind_no,
            rit.delete_date as req_ind_delete_date,
            indcnt.delete_date as req_ind_cont_delete_date
from
    tmp.req_index_cont_t indcnt,
    tmp.req_index_t rit
where    rit.ind_no             = indcnt.ind_no
    and   indcnt.indcont_key_1 <> 'DN'
    and   rit.ind_state         = 'Approved'
group by    rit.item_no,
            rit.item_type,
            rit.ind_no,
            indcnt.delete_date,
            rit.delete_date ;
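One thing worth checking (a sketch, assuming an Oracle-style cost-based optimizer): the `<>` filter on indcont_key_1 typically excludes few rows and cannot drive an index range scan, so the index on (ind_no, indcont_key_1) gives the optimizer little reason to abandon the full scan. The selective equality filter sits on the other table, so an index covering that filter plus the join column may be more attractive:

```sql
-- Hypothetical index: serves ind_state = 'Approved' and the join on
-- ind_no. Whether the optimizer picks it still depends on statistics
-- and row counts.
CREATE INDEX tmp.REQ_INDEX_T_IDX
    ON tmp.req_index_t (ind_state, ind_no);
```

Separately, the query applies both DISTINCT and GROUP BY over the same columns; they perform the same deduplication, so one of the two is redundant work.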