2022-08-31

ARMember: Hook to Change Form Field value

How do I change the case/value of an ARMember form field before it creates the subscriber?

I can make the desired changes in $posted_data, which is passed to the various hooks I tried, but the changes do not persist outside of my_function.

I tried declaring global $posted_data in my_function, and passing &$posted_data by reference in my_function's parameters.

It must be something simple. Thanks.
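
A minimal sketch of the usual WordPress pattern, assuming ARMember exposes a filter over the posted form data (the hook name arm_user_reg_form_data below is an assumption; ARMember's docs list the actual one). In-place edits and globals don't survive because filters pass $posted_data by value; the callback has to return the modified array:

add_filter( 'arm_user_reg_form_data', 'my_change_field_value' );

function my_change_field_value( $posted_data ) {
    // Example: force the email field to lowercase before the subscriber is created
    if ( isset( $posted_data['user_email'] ) ) {
        $posted_data['user_email'] = strtolower( $posted_data['user_email'] );
    }
    return $posted_data; // returning the array is what makes the change persist
}

If the hook being used is an action rather than a filter, the return value is ignored, which would also explain the behaviour described.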



How to set required on a custom select option?

I want to show a required message when the user doesn't make a choice.

This is my code:

var options = document.querySelectorAll('.myOptions');
var selecText = document.querySelector('.selectFeld>p');
var mylist = document.querySelector('.list_contrat');
//var iconSelect = document.querySelector(".icon_typeCont_rota");
var valueTypeContra = document.querySelector('#typecontrat');


for (const option of options) {
    option.onclick = function() {
        mylist.classList.toggle('myhide');
        //iconSelect.classList.toggle('myRotate');
        selecText.innerHTML = this.textContent;
        valueTypeContra.value = this.getAttribute('data-value'); // get value select option
        
    }
}
<div class="selectFeld" title="Type de contrat">
    <input type="text" name="typeContrat" id="typecontrat" class="d-none" required>
    <p>Type de contrat</p>
    <img src="icon_form/Icon_contrat_deroulant.png" alt="" class="icon_select">
</div>
<ul class="container-optionSelec list_contrat myhide">
    <li class="myOptions" data-value="redaction"><p>Redaction</p></li>
    <li class="myOptions" data-value="assistance"><p>Assistance</p></li>
</ul>

It's a custom select; the code sets the value on the input, which is hidden with display: none.
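
A minimal sketch of one option, assuming the fields sit inside a <form> (not shown in the question): because the real input is hidden with d-none, the browser's built-in required bubble never appears, so the check has to be done manually on submit:

var form = document.querySelector('form'); // assumes the custom select lives in a form
var hiddenInput = document.querySelector('#typecontrat');

form.addEventListener('submit', function (event) {
    if (hiddenInput.value === '') {
        event.preventDefault(); // block submission until a choice is made
        selecText.textContent = 'Veuillez choisir un type de contrat';
        selecText.style.color = 'red';
    }
});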



Anchor tag with spans cannot be clicked

I cannot click on that anchor. I tried a couple of possible solutions, such as adding another anchor or div inside and outside that anchor, and adding a z-index and position. Edit: I've added the full code, and I realize that z-index must solve the problem; however, I could not figure out where and how to use it.

@import url('https://fonts.googleapis.com/css2?family=Noto+Sans+JP:wght@900&display=swap');
@import url('https://fonts.googleapis.com/css2?family=Catamaran:wght@100&display=swap');

* {
    scroll-behavior: smooth;
    text-decoration: none;
}

div, section {
    margin: 0;
    padding: 0;
}

body {
    margin: 0;
    padding: 0;
    background-color: #000;
    font-family: 'Catamaran', sans-serif;
}

main {
    pointer-events: none;
    height: 100vh;
    width: 100%;
    padding: 0;
    margin: 0;
    position: relative;
    z-index: -1;
}

.hi, .name {
    font-family: 'Noto Sans JP', sans-serif;
    font-size: 170px;
    color: #fff;
    height: fit-content;
    position: absolute;
}

.hi {
    top: 2%;
    left: 5%;
    text-shadow: 1px 1px 15px rgba(0,0,0,0.83);
}

.name {
    right: 7%;
    top: 18%;
}

.me {
    font-family: 'Noto Sans JP', sans-serif;
    font-size: 60px;
    color: #fff;
    height: fit-content;
    position: absolute;
    bottom: 2%;
    left: 5%;
    text-shadow: 1px 1px 15px rgba(0,0,0,0.83);
}

.background-text {
    width: fit-content;
    background: #12c2e9;
    background: -webkit-linear-gradient(to right, #f64f59, #c471ed, #12c2e9);
    background: linear-gradient(to right, #f64f59, #c471ed, #12c2e9);
    -webkit-background-clip: text;
    -webkit-text-fill-color: transparent;
    background-clip: text;
    color: transparent;
}

.img-me {
    position: absolute;
    left: 50%;
    top: 47%;
    transform: translate(-50%, -50%);
    z-index: -1;
}

.img-me img {
    height: 82vh;
}

.slide-down {
    display: inline-block;
    position: absolute;
    left: 50%;
    bottom: 2%;
    transform: translateX(-50%);
}

.mouse-btn {
    margin: 10px auto;
    width: 40px;
    height: 80px;
    border: 3px solid rgba(122, 122, 124, 0.918);
    border-radius: 20px;
    display: flex;
  }
  
.mouse-scroll {
    width: 20px;
    height: 20px;
    background: linear-gradient(170deg, rgba(122, 122, 124, 0.918), rgb(123, 124, 124));
    border-radius: 50%;
    margin: auto;
    animation: scrolling13 1.5s linear infinite;
}
  
@keyframes scrolling13 {
    0% {
        opacity: 0;
        transform: translateY(-20px);
    }

    75% {
        opacity: 1;
        transform: translateY(18.5px);
    }

    80% {
        opacity: 0.5;
        transform: translateY(18.8px);
    }

    84% {
        opacity: 0.4;
        transform: translateY(19px);
    }

    88% {
        opacity: 0.3;
        transform: translateY(19.2px);
    }

    92% {
        opacity: 0.2;
        transform: translateY(19.4px);
    }

    95% {
        opacity: 0.1;
        transform: translateY(19.6px);
    }

    98% {
        opacity: 0;
        transform: translateY(19.8px);
    }
    
    100% {
        opacity: 0;
        transform: translateY(20px);
    }
}
<main>
        <div class="hi">hi</div>
        <div class="name background-text">I am<br>Eren</div>
        <div class="me">A freelancer<br>developer<br>and a student</div>
        <div class="img-me"><img src="img/businessman-chatting-on-phone.png" alt="Young Man Chatting on Phone Illustration"></div>
        <a href="https://www.youtube.com/" class="slide-down">
            <span class="mouse-btn">
                <span class="mouse-scroll"></span>
            </span>
        </a>
</main>
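
A minimal sketch of a likely fix: pointer-events: none combined with z-index: -1 on <main> disables clicks for everything inside it, including the anchor. Opting the link back in is usually enough:

main {
    pointer-events: none; /* children inherit this and ignore clicks */
}

.slide-down {
    pointer-events: auto; /* the anchor opts back in to receiving clicks */
    z-index: 1;           /* and sits above content stacked at z-index: -1 */
}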


How to trigger an action on the UI from the adapter, and the reverse, from the UI to the adapter?

Is it possible to stop the flow after the first collection of the data?

When I debug it, the first click produces this order:

fromFragment

    private val mutableStateAdapterFlow = MutableStateFlow(-1)


[...]
                vm.updateGoalData(value.id, updatedData) // method to updateMyGoal
                setGoalsAdapter.notifyDataSetChanged()

  • I also tried without notifyDataSetChanged(); because of the flow, it is triggered anyway.

Then the methods in the coroutine in the adapter are triggered multiple times, and it changes the value by +1 but then shows the previous value in the UI. E.g., daysLeft was 5; I clicked to add 1 day and it went to 6, but this coroutine is triggered multiple times and it comes back to 5.

                addDay.setOnClickListener {
                    onPlusButtonClickedListener(
                        CustomSetGoalsDialogData(
                            item.id,
                            item.goal,
                            item.timeGoal
                        )
                    )
                    mutableAddMinusDayStateFlow.value = item.id
                }

                CoroutineScope(Dispatchers.Main).launch {
                    mutableAddMinusDayStateFlow.collectLatest {
                        idTvItemTimeGoal.text = "$daysLeft days left"
                    }
                    mutableAddMinusDayStateFlow.value = -1
                }

IMO the problem is collectLatest, but I'm not sure what I could use instead. The weird thing is that it works, but it is only refreshed after the second click. When I click, open any other fragment, and then come back, the data has changed.



Rust: how to assign `iter().map()` or `iter().enumerate()` to the same variable

struct A {...whatever...};
const MY_CONST_USIZE:usize = 127;


// somewhere in function

// vec1_of_A:Vec<A> vec2_of_A_refs:Vec<&A> have values from different data sources and have different inside_item types
let my_iterator;
if my_rand_condition() { // my_rand_condition is random and compiles for sake of simplicity
    my_iterator = vec1_of_A.iter().map(|x| (MY_CONST_USIZE, &x)); // Map<Iter<Vec<A>>>
} else {
    my_iterator = vec2_of_A_refs.iter().enumerate(); // Enumerate<Iter<Vec<&A>>>
}

How do I make this code compile?

In the end (based on the condition) I would like to have an iterator that can be built from both inputs, and I don't know how to unify these Map and Enumerate types in a single variable without calling collect() to materialize the iterator as a Vec.

Reading material is welcome.
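
A minimal sketch of one common approach: box the iterator as a trait object so both branches share one type. The item types also have to line up (enumerate() over a Vec<&A> yields (usize, &&A)), hence the extra deref in the second branch:

struct A;

const MY_CONST_USIZE: usize = 127;

fn my_rand_condition() -> bool { true }

fn main() {
    let vec1_of_a: Vec<A> = vec![A, A];
    let vec2_of_a_refs: Vec<&A> = vec1_of_a.iter().collect();

    // one type for both branches: a boxed trait object (one allocation,
    // dynamic dispatch per item)
    let my_iterator: Box<dyn Iterator<Item = (usize, &A)>> = if my_rand_condition() {
        Box::new(vec1_of_a.iter().map(|x| (MY_CONST_USIZE, x)))
    } else {
        // enumerate() yields (usize, &&A) here, so deref once to match
        Box::new(vec2_of_a_refs.iter().enumerate().map(|(i, x)| (i, *x)))
    };

    println!("{}", my_iterator.count());
}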



HTTPS & TCP Traffic Through AWS ALB

I'm quite new to networking, but I have been working on this problem for quite some time with no success.

I have an AWS EC2 instance (Windows Server) hosting a video management web portal. The user should be able to access the web portal through their browser and view video footage (traffic is both HTTP and TCP). The issue is that I am trying to route DNS requests for the web portal through an Amazon Application Load Balancer, forwarded to my EC2, so that I can make use of Amazon's Certificate Manager, as I would like the webpage to be encrypted.

If I access the EC2 directly (with its IP or DNS), everything works correctly. However, when the traffic routes through the ALB, the video never loads, and I assume this is because the ALB does not pass the TCP traffic through, just the HTTP/HTTPS traffic. If I use a Network Load Balancer to route the traffic, then I am able to see the video just fine; the issue there is that there is no way to add my certificate to the NLB and encrypt the traffic. I'm stuck, but I know that for someone with more experience than me, this is likely a very simple problem.

Any advice you have would be greatly appreciated. Thank you



2022-08-30

Python POST with nested parameters and X-XSRF-TOKEN failure

I am trying to collect data from the following URL: https://muskegon.policetocitizen.com/Inmates/Catalog.

This relies on a secondary POST to https://muskegon.policetocitizen.com/api/Inmates/3 using an X-XSRF-TOKEN header (which appears to be just an XSRF token, available in the cookies).

When I try to include the specified parameters and this token, my code is as follows:

import requests
from urllib.parse import urlencode

url = "https://muskegon.policetocitizen.com/Inmates/Catalog"
api_url = "https://muskegon.policetocitizen.com/api/Inmates/3"

r = requests.Session()
res = r.get(url)
cookies = res.cookies
cross_ref_token = res.cookies.get("XSRF-TOKEN")

payload = {
    "FilterOptionsParameters": {
        "IntersectionSearch": "true",
        "SearchText": "",
        "Parameters": []
    },
    "IncludeCount": "true",
    "PagingOptions": {
        "SortOptions": [],
        "Take": 10,
        "Skip": 0
    }
}

headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36",
    "X-XSRF-TOKEN": cross_ref_token,
    "Content-Type": "application/json;charset=UTF-8",
    "Accept": "application/json, text/plain, */*"
}

res = r.post(api_url, headers=headers, params=urlencode(payload), cookies=cookies)

With the above, I'm still receiving a 500. I'm not sure if the problem is specifically a failure to include the nested parameters, or another missing identifier.

Update

Is this potentially related to the nonce attribute of the content security policy?
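
A minimal sketch of one likely culprit (an assumption; the endpoint may check more than this): params=urlencode(...) puts the payload into the query string, but this API expects a JSON body. requests' json= argument serializes the nested dict intact and sets the Content-Type header, and the session re-sends its cookies automatically:

import requests

url = "https://muskegon.policetocitizen.com/Inmates/Catalog"
api_url = "https://muskegon.policetocitizen.com/api/Inmates/3"

session = requests.Session()
token = session.get(url).cookies.get("XSRF-TOKEN")

payload = {
    "FilterOptionsParameters": {
        "IntersectionSearch": True,
        "SearchText": "",
        "Parameters": []
    },
    "IncludeCount": True,
    "PagingOptions": {"SortOptions": [], "Take": 10, "Skip": 0}
}

# json= sends the nested structure as the request body
res = session.post(api_url, json=payload, headers={"X-XSRF-TOKEN": token})
print(res.status_code, res.text[:200])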



Run job on all existing Jenkins workers

I have a job in a pipeline that cleans up Docker images. It runs the job on each worker individually. This is frustrating, because when I add jenkins-cpu-worker3, I'll have to update this job. I'd like to run this job in such a way that it runs on all workers without having to update it each time a new worker is present. I also want the job to be able to run regardless of what I name each worker; it needs to run on all workers no matter what. Is there a way to query Jenkins from within the pipeline to get a list or array of all the workers that exist? I was leafing through documentation and posts online, and I have not found a solution that works. If possible, I'd like to do this without any additional Jenkins plugins.

pipeline {
  agent any

  stages {

    stage('Cleanup jenkins-cpu-worker1') {
      agent {
        node {
          label 'jenkins-cpu-worker1'
        }
      }

      steps {
        sh "docker container prune -f"
        sh "docker image prune -f"
        sh '''docker images | awk '{print $1 ":" $2}' | xargs docker image rm || true'''
        sh "docker network prune -f"
        sh "docker volume prune -f"
      }
    }

    stage('Cleanup jenkins-cpu-worker2') {
      agent {
        node {
          label 'jenkins-cpu-worker2'
        }
      }

      steps {
        sh "docker container prune -f"
        sh "docker image prune -f"
        sh '''docker images | awk '{print $1 ":" $2}' | xargs docker image rm || true'''
        sh "docker network prune -f"
        sh "docker volume prune -f"
      }
    }
  }
}
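
A minimal sketch of the scripted-pipeline route (no extra plugins, but Jenkins.get() needs script approval unless it runs from a trusted shared library): the node list comes from the Jenkins API itself, so new workers are picked up automatically:

def nodeNames = jenkins.model.Jenkins.get().getNodes().collect { it.getNodeName() }

def cleanupStages = [:]
for (name in nodeNames) {
    def nodeName = name // capture per iteration for the closure below
    cleanupStages[nodeName] = {
        node(nodeName) {
            sh 'docker container prune -f'
            sh 'docker image prune -f'
            sh 'docker network prune -f'
            sh 'docker volume prune -f'
        }
    }
}

parallel cleanupStages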


Why are these 2 queries giving different outputs?

Query no. 1:

SELECT COUNT(ENAME)
FROM EMP
WHERE 
 JOB IN 'MANAGER' 
 OR JOB IN 'ANALYST' 
 AND SAL IN (
    SELECT SAL + NVL (COMM,0)
    FROM EMP
    WHERE SAL LIKE '%0')
GROUP BY JOB;

Query 1 gives me the following output:

COUNT(ENAME)
------------
           3
           2

Query no. 2:

 SELECT COUNT(ENAME)
 FROM EMP
 WHERE 
   JOB = ANY (
       SELECT JOB
       FROM EMP
       WHERE JOB IN ('MANAGER', 'ANALYST') 
     )
   AND SAL IN (
       SELECT SAL + NVL (COMM,0)
       FROM EMP
       WHERE SAL LIKE '%0'
     )
 GROUP BY JOB;

Query 2 gives me the following output:

COUNT(ENAME)
------------
           2
           2
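
The difference comes down to operator precedence: AND binds tighter than OR, so Query 1's WHERE clause is effectively the parenthesized form below, letting the MANAGER rows bypass the salary check entirely. A sketch of Query 1 with the implicit grouping made explicit:

SELECT COUNT(ENAME)
FROM EMP
WHERE
  JOB IN 'MANAGER'
  OR (JOB IN 'ANALYST'
      AND SAL IN (SELECT SAL + NVL(COMM, 0)
                  FROM EMP
                  WHERE SAL LIKE '%0'))
GROUP BY JOB;

Query 2 applies the salary subquery to both jobs, which is why its MANAGER count drops from 3 to 2.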


Save Nested JSON in MySQL Database using Spring Boot

I want to save this nested JSON data in a MySQL DB which has a JSON column, using Spring Data JPA.

How can I make an entity class for such data? I don't want to establish any relationships; I just want to save the data taken as input and be able to fetch it.

Do I need to create new entity classes for the nested objects even if I don't want to establish any relationship between them?

 {
      "Data": [
        {
          "url": "xyz.com",
          "pswd": "admin",
          "user": "admin",
          "Test_Case": "T01"
        }
      ],
      "Page": [
        {
          "Index": "",
          "Property": "",
          "Identifier": "",
          "Data_Column": "url",
          "Description": "",
          "Screenshots": "",
          "User_Action": "LAUNCH",
          "Identifier_Value": ""
        },
        {
          "Index": "",
          "Property": "",
          "Identifier": "ID",
          "Data_Column": "user",
          "Description": "",
          "Screenshots": "",
          "User_Action": "SET",
          "Identifier_Value": "usernameUserInput"
        },
        {
          "Index": "",
          "Property": "",
          "Identifier": "ID",
          "Data_Column": "pswd",
          "Description": "",
          "Screenshots": "",
          "User_Action": "SET",
          "Identifier_Value": "password"
        },
        {
          "Index": "",
          "Property": "",
          "Identifier": "XPATH",
          "Data_Column": "",
          "Description": "",
          "Screenshots": "",
          "User_Action": "CLICK",
          "Identifier_Value": "//*[@id=\"loginForm\"]/div[10]/div[2]/button"
        },
        {
          "Index": "",
          "Property": "",
          "Identifier": "XPATH",
          "Data_Column": "",
          "Description": "",
          "Screenshots": "",
          "User_Action": "CLICK",
          "Identifier_Value": "/html/body/app-root/app-sidebar/section/div/a[8]/img"
        },
        {
          "Index": "",
          "Property": "",
          "Identifier": "XPATH",
          "Data_Column": "",
          "Description": "",
          "Screenshots": "",
          "User_Action": "HIGHLIGHT",
          "Identifier_Value": "/html/body/app-root/div/app-sc-home/span/ol/li/a"
        }
      ]
    }
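
A minimal sketch of one option, assuming no relationships are wanted: persist the whole payload as a raw JSON string in a MySQL JSON column, so no entity classes are needed for the nested objects (entity and table names here are made up for illustration):

import javax.persistence.*;

@Entity
@Table(name = "test_definition") // table name is an assumption
public class TestDefinition {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // MySQL validates the content as JSON; JPA just reads and writes a String
    @Column(columnDefinition = "json")
    private String payload;

    public Long getId() { return id; }
    public String getPayload() { return payload; }
    public void setPayload(String payload) { this.payload = payload; }
}

The controller can accept the request body as a String and store it as-is; nested fields remain queryable on the MySQL side via JSON_EXTRACT.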


Dynamically generated tasks in Airflow 2.2.5 are moved to "REMOVED" state and breaks down the GANTT chart

Airflow Version : 2.2.5
Composer Version : 2.0.19

We have a task group which creates tasks dynamically using a for loop. Within the task group we make use of BigQueryTableDeleteOperator to delete the tables.

Issue: We noticed that once the tables are deleted, all the tasks move to the REMOVED state, breaking the Gantt chart with the error message "Task not found".

Before the task runs: Image 1

After the task runs: Image 2

As shown above, before the task group runs, it shows all the tables to be deleted, each represented by a task; in this example, 2 tasks.

Once the tasks run to success and the tables are deleted, those tasks are removed.

Sharing the piece of code below :

for table in tables_list:
    table_name = projectid + '.' + dataset + '.' + table
    if table not in safe_tables:
        delete_table_task = bigquery_table_delete_operator.BigQueryTableDeleteOperator(
            task_id=f"delete_tables_{table_name}",
            deletion_dataset_table=f"{table_name}",
            ignore_if_missing=True)
        list_operator += [delete_table_task]

print(list_operator)

dummy_task >> list_operator

tables_list : List of tables to be deleted

safe_tables : List of tables not to be deleted

Please let me know what we are missing here that is causing the issue.



Select a view area by CAShapeLayer and change its background colour?

How can I select an area of a UIView using a CAShapeLayer and change that selected area's background colour? How do I achieve this?
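
A minimal sketch, assuming "selected area" means a path-shaped region drawn over the view: a CAShapeLayer's fillColor acts as the background colour of whatever region its path covers:

import UIKit

let view = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))

let shape = CAShapeLayer()
// The "selected area" is described by a path; an oval here, for illustration
shape.path = UIBezierPath(ovalIn: CGRect(x: 20, y: 20, width: 120, height: 80)).cgPath
shape.fillColor = UIColor.systemTeal.cgColor // the area's background colour

view.layer.addSublayer(shape)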



Is there a way to constrain a generic type parameter to generic types?

Is there a way to constrain a generic type parameter to generic types?

//I can constrain a parameter to only object types like this
type GenericType<T extends object> = keyof T ...
//How can I do that for generic types?
type GenericModifier<T extends /* Generic<T> */> = T<...>
//I want to do something like this:
type Distribute<target, type> = type extends infer A ? target<A> : never;

Is that possible?
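
Not directly: TypeScript has no higher-kinded types, so a type parameter cannot itself take type arguments. A minimal sketch of the common workaround (defunctionalization, as used by libraries like fp-ts): a registry interface maps a name to the concrete instantiation, simulating target<A>:

interface TypeRegistry<A> {
    Array: A[];
    Promise: Promise<A>;
    Set: Set<A>;
}

type Kind = keyof TypeRegistry<unknown>;

// "Apply" a named constructor to an argument type
type Apply<F extends Kind, A> = TypeRegistry<A>[F];

type T1 = Apply<"Array", string>;   // string[]
type T2 = Apply<"Promise", number>; // Promise<number>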



2022-08-29

How can I apply a linear transformation on sparse matrix in PyTorch?

In PyTorch, we have nn.Linear, which applies a linear transformation to the incoming data:

y = WA+b

In this formula, W and b are our learnable parameters and A is my input data matrix. In my case the matrix A is too large to fully load into RAM, so I use it in sparse form. Is it possible to perform such an operation on sparse matrices using PyTorch?
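
A minimal sketch: torch.sparse.mm multiplies a sparse matrix by a dense one, so the same learnable parameters nn.Linear would hold can be applied while A stays in sparse (COO) form, with gradients flowing to W and b:

import torch

in_features, out_features = 1000, 64

W = torch.randn(out_features, in_features, requires_grad=True)
b = torch.zeros(out_features, requires_grad=True)

# A as a sparse COO matrix (here: 500 samples, 3 nonzero entries)
indices = torch.tensor([[0, 1, 2], [10, 20, 30]])
values = torch.tensor([1.0, 2.0, 3.0])
A = torch.sparse_coo_tensor(indices, values, size=(500, in_features))

y = torch.sparse.mm(A, W.t()) + b  # dense (500, out_features) result
y.sum().backward()                 # gradients reach W and b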



Recursive query hangs then get "Error Code: 1030. Got error 1 - 'Operation not permitted' from storage engine" error

I'm trying to build a recursive query to enable me to find all future sports match records for the two players of a given match. In addition to this I need the query to return any match for any player that plays in any descendant match. To illustrate using some example data:

match_id  match_date  p1_id  p2_id
1         01/01/2022  1      2
2         02/01/2022  1      3
3         03/01/2022  3      4
4         04/01/2022  5      6

I only really need match_id, so if the start match is match_id = 1, then I'm looking for the query to return 1. The query should also return 2, because p1_id = 1 played in the start match. The query should also return 3, because p2_id = 3 played in match_id = 2.

I've written the following query:

WITH RECURSIVE match_ids AS (
  SELECT
    rt1.match_id,
    rt1.p1_id,
    rt1.p2_id,
    rt1.match_date
  FROM recursive_test_so AS rt1
  WHERE rt1.match_id = 1
  UNION ALL
  SELECT
    rt2.match_id,
    rt2.p1_id,
    rt2.p2_id,
    rt2.match_date
  FROM recursive_test_so AS rt2
    JOIN match_ids ON 
      rt2.match_date > match_ids.match_date
  WHERE (
    rt2.p1_id IN (match_ids.p1_id, match_ids.p2_id)
    OR rt2.p2_id IN (match_ids.p1_id, match_ids.p2_id)
  )
)
SELECT DISTINCT match_id
FROM match_ids;

This works fine on the sample data.

However, when I scale the data up to 10k rows then the query runs for about 5 mins with no output and then I get the following error:

Error Code: 1030. Got error 1 - 'Operation not permitted' from storage engine

What might I be doing wrong?

SQL to replicate the sample data table:

CREATE TABLE `recursive_test_so` (
  `match_id` int NOT NULL,
  `match_date` date NOT NULL,
  `p1_id` int NOT NULL,
  `p2_id` int NOT NULL,
  PRIMARY KEY (`match_id`),
  KEY `match_date` (`match_date`),
  KEY `p1_id` (`p1_id`),
  KEY `p2_id` (`p2_id`),
  KEY `comp_all` (`match_date`,`p1_id`,`p2_id`),
  KEY `comp_player_ids` (`p1_id`,`p2_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3 COLLATE=utf8_unicode_ci;

INSERT INTO `recursive_test_so`
VALUES
  (1,'2022-01-01',1,2),
  (2,'2022-01-02',1,3),
  (3,'2022-01-03',3,4),
  (4,'2022-01-04',5,6);

I'm not sure how I could post the 10k rows of data.



Python: 2 Conditions - Read in characters Until x, and Count Vowels

Specification: Read in characters until the user enters a full stop '.'. Show the number of lowercase vowels.

So far, I've succeeded in completing the read loop and printing out 'Vowel Count: '. However, the vowel count always comes to 0. I've just started, and I'm struggling with placement for 'Show the number of lowercase vowels'. Should I define vowels = ... at the top, or put it in a loop later? Do I create a new loop? I haven't been able to make it work. Thanks.

c = str(input('Character: '))
count = 0

while c != '.':
    count += 1
    c = str(input('Character: '))

print("Vowel count =", count)


Javascript Stored Procedure Snowflake

  1. I am working on an SP which will look for the table names defined in an array across all databases.
  2. It creates a view by a UNION on the same table name.
  3. For example, if Table A is present in DB 1 and DB 2, then it creates the view by selecting records from both DBs.
create or replace procedure PROC_1()
  returns VARCHAR -- return final create statement
  language javascript
  as     
$$
    //given two db for testing
    var get_databases_stmt = "SELECT DATABASE_NAME FROM SNOWFLAKE.INFORMATION_SCHEMA.DATABASES WHERE DATABASE_NAME='TERRA_DB' OR DATABASE_NAME='TERRA_DB_2'"
    var get_databases_stmt = snowflake.createStatement({sqlText:get_databases_stmt });
    var databases = get_databases_stmt.execute();
    var row_count = get_databases_stmt.getRowCount();
    var rows_iterated = 0;
    //table on which view will be created       
    var results_table=['STAGE_TABLE','JS_TEST_TABLE'];
    var results_db=[];
    while (databases.next())  {
        var database_name = databases.getColumnValue(1);
        //rows_iterated += 1;
        for (let j = 0; j < results_table.length; j++){
            var stmt="CREATE OR REPLACE VIEW  TERRA_DB.TERRA_SCHEMA.ALL_"+results_table[j]+" AS \n";
            stmt += "SELECT * FROM "+database_name+".TERRA_SCHEMA." + results_table[j]
            if (rows_iterated < row_count){
                stmt += " UNION ALL";
            }
            ++rows_iterated;
        }
    }
    //var sql = snowflake.createStatement({sqlText:stmt});
    //var res =sql.execute();
    return stmt;
$$;
call PROC_1();  

Note: the code provided above creates the view by selecting data from one DB only; ideally it should select from both DBs. Any help will be appreciated! I am new to JS SPs.
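
A minimal sketch of a restructuring (the body below goes inside the procedure's $$ ... $$ block; not a drop-in replacement): loop over the table names on the outside and the databases on the inside, so each view statement accumulates one SELECT per database before it is issued:

var tables = ['STAGE_TABLE', 'JS_TEST_TABLE'];
var stmt = '';

for (var j = 0; j < tables.length; j++) {
    // re-read the database list per table, so the result set is fresh
    var dbs = snowflake.createStatement({
        sqlText: "SELECT DATABASE_NAME FROM SNOWFLAKE.INFORMATION_SCHEMA.DATABASES " +
                 "WHERE DATABASE_NAME IN ('TERRA_DB', 'TERRA_DB_2')"
    }).execute();

    var selects = [];
    while (dbs.next()) {
        selects.push("SELECT * FROM " + dbs.getColumnValue(1) + ".TERRA_SCHEMA." + tables[j]);
    }

    stmt = "CREATE OR REPLACE VIEW TERRA_DB.TERRA_SCHEMA.ALL_" + tables[j] + " AS\n" +
           selects.join("\nUNION ALL\n");
    snowflake.createStatement({sqlText: stmt}).execute();
}
return stmt; // last statement issued, as in the original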



How to wait for the canvas fade in out to finish before saving the game?

This script makes a canvas group's alpha change between 0 and 1:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using TMPro;

public class Description : MonoBehaviour
{
    public Canvas canvas;
    public AnimationCurve animationCurve;
    public float fadingSpeed = 5f;
    
    public TMP_InputField _inputField;

    public enum Direction { FadeIn, FadeOut };

    private CanvasGroup canvasGroup;

    void Start()
    {
        if (canvas == null) canvas = GetComponent<Canvas>();
        canvasGroup = canvas.GetComponent<CanvasGroup>();
        if (canvasGroup == null) Debug.LogError("Please assign a canvas group to the canvas!");

        if (animationCurve.length == 0)
        {
            Debug.Log("Animation curve not assigned: Create a default animation curve");
            animationCurve = AnimationCurve.EaseInOut(0f, 0f, 1f, 1f);
        }
    }

    public void StartFading(bool InOut)
    {
        if (canvasGroup != null)
        {
            if (InOut)
            {
                StartCoroutine(FadeCanvas(canvasGroup, Direction.FadeIn, fadingSpeed));
            }
            else
            {
                StartCoroutine(FadeCanvas(canvasGroup, Direction.FadeOut, fadingSpeed));
            }
        }
    }

    public IEnumerator FadeCanvas(CanvasGroup canvasGroup, Direction direction, float duration)
    {
        var startTime = Time.time;
        var endTime = Time.time + duration;
        var elapsedTime = 0f;

        if (direction == Direction.FadeIn) canvasGroup.alpha = animationCurve.Evaluate(0f);
        else canvasGroup.alpha = animationCurve.Evaluate(1f);

        while (Time.time <= endTime)
        {
            elapsedTime = Time.time - startTime;
            var percentage = 1 / (duration / elapsedTime);
            if ((direction == Direction.FadeOut)) // if we are fading out
            {
                canvasGroup.alpha = animationCurve.Evaluate(1f - percentage);
            }
            else
            {
                canvasGroup.alpha = animationCurve.Evaluate(percentage);
            }

            yield return new WaitForEndOfFrame();
        }

        if (direction == Direction.FadeIn) canvasGroup.alpha = animationCurve.Evaluate(1f);
        else canvasGroup.alpha = animationCurve.Evaluate(0f);

        _inputField.readOnly = false;
    }
}

And using it:

using UnityEngine;
using System.Collections;
using System.IO;

public class SavingGame : MonoBehaviour
{
    public int resWidth = 1920;
    public int resHeight = 1080;
    public SaveLoad saveLoad;
    public Description description;

    private static int countName;

    private void Start()
    {
        countName = 0;

        string[] dirs = Directory.GetDirectories(Application.persistentDataPath + "\\" + "Saved Screenshots",
            "*.*", SearchOption.TopDirectoryOnly);

        if(dirs.Length > 0)
        {
            countName = dirs.Length;
        }
    }

    public static string ScreenShotName(int width, int height)
    {
        return string.Format("{0}/Saved Screenshots/SaveSlot{1} SavedGameSlot_{2}x{3}_{4}/SavedGameSlot_{1}x{2}_{3}.png",
            Application.persistentDataPath,
            countName,
            width, height, System.DateTime.Now.ToString("yyyy-MM-dd_HH-mm-ss"));
    }

    void Update()
    {
        if (Input.GetKeyDown("k"))
        {
            description.StartFading(true);
        }
    }

    public void Save()
    {
        description.StartFading(false);

        string filename = ScreenShotName(resWidth, resHeight);
        string directory = Path.GetDirectoryName(filename);
        Directory.CreateDirectory(directory);
        ScreenCapture.CaptureScreenshot(filename);
        StartCoroutine(saveLoad.SaveWithTime(directory, Path.GetFileNameWithoutExtension(filename) + ".savegame.txt"));

        countName++;
    }
}

I'm calling the Save method through the editor UI button's OnClick event.

The problem is that, before saving, I first want the fading out of the canvas to finish, and only then run the rest of the saving code in the Save method:

public void Save()
{
    description.StartFading(false);

I want the saving to happen after StartFading has finished:

string filename = ScreenShotName(resWidth, resHeight);
string directory = Path.GetDirectoryName(filename);
Directory.CreateDirectory(directory);
ScreenCapture.CaptureScreenshot(filename);
StartCoroutine(saveLoad.SaveWithTime(directory, Path.GetFileNameWithoutExtension(filename) + ".savegame.txt"));

countName++;

I'm not sure how to do it. Maybe using a while loop in the Save method?
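
A minimal sketch of one way, assuming Description exposes its CanvasGroup through a public accessor (CanvasGroupForFade below is an assumption, not in the original): StartCoroutine returns a handle that can itself be yielded, so Save can delegate to a coroutine that waits for the fade before saving:

public void Save()
{
    StartCoroutine(FadeThenSave());
}

private IEnumerator FadeThenSave()
{
    // Waits here until the fade-out coroutine has completely finished.
    // CanvasGroupForFade is an assumed public accessor on Description.
    yield return StartCoroutine(description.FadeCanvas(
        description.CanvasGroupForFade, Description.Direction.FadeOut, description.fadingSpeed));

    string filename = ScreenShotName(resWidth, resHeight);
    string directory = Path.GetDirectoryName(filename);
    Directory.CreateDirectory(directory);
    ScreenCapture.CaptureScreenshot(filename);
    yield return StartCoroutine(saveLoad.SaveWithTime(
        directory, Path.GetFileNameWithoutExtension(filename) + ".savegame.txt"));

    countName++;
}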



2022-08-28

How to limit CPU numbers in Docker Client API?

I have a script using the Docker Python library (Docker Client API). I would like to limit each Docker container to use only 10 CPUs (out of 30 CPUs total on the instance), but I couldn't find a solution to achieve that.

I know docker has a --cpus flag, but the Python library only seems to offer the cpu_shares (int): CPU shares (relative weight) parameter. Does anyone have experience in setting a limit on CPU usage using Docker?

import docker
client = docker.DockerClient(base_url='unix://var/run/docker.sock')
container = client.containers.run(my_docker_image, mem_limit='30g')

Edits:

I tried nano_cpus as suggested here, like client.containers.run(my_docker_image, nano_cpus=10000000000), to set 10 CPUs. When I inspected the container, it did show "NanoCpus": 10000000000. However, if I run R in the container and do parallel::detectCores(), it still shows 30, which confuses me. I am also adding the R tag now.

Thank you!
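
A minimal sketch of the distinction: nano_cpus (like cpu_quota/cpu_period) caps CPU time but does not hide host cores, which is why parallel::detectCores() still reports 30. cpuset_cpus pins the container to specific cores, so core-counting tools see only those:

import docker

client = docker.DockerClient(base_url='unix://var/run/docker.sock')

container = client.containers.run(
    my_docker_image,              # image variable from the question
    mem_limit='30g',
    nano_cpus=10_000_000_000,     # at most 10 CPUs' worth of time
    cpuset_cpus='0-9',            # only cores 0-9 are visible and usable
    detach=True,
)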



Invoking if/then within find/replace routine

I have a macro to perform various tech-editing tasks in technical documents. One task is to ensure large numbers have commas in the correct locations. My routine to insert commas works fine, but it also catches dates, street numbers, etc. (e.g., 15 January 2,022 and 1,234 Smith Street). I am now attempting to correct the street addresses using the routine below, but am doing something wrong with my looping. Currently, it only finds/fixes the first instance of a street number with a comma in it, then stops looping.

Please note that the current code snippet below includes several commented-out commands that I tried during my troubleshooting...

What am I missing?

'remove commas from street addresses
Set oRange = ActiveDocument.Range
With oRange.Find
    'Set the search conditions
    .ClearFormatting
    .Text = "(<[0-9]{1,2})(,)([0-9]{3})"
    .Forward = True
    .Wrap = wdFindContinue
    .Format = False
    .MatchWildcards = True
    .Execute
    
    'If .Found Then
    Do While .Found
        oRange.Select 'for debugging purposes
        If (InStr(1, "NorthEastWestSouth", Trim(oRange.Words(3).Next(wdWord, 1)), 0) <> 0 And Len(Trim(oRange.Words(3).Next(wdWord, 1))) > 1) Or _
            (InStr(1, "StreetAvenueRoadRdBoulevardBlvdPikeCircleHighwayHwyCourtCtLaneWayParkwayAlleyBypassEsplanadeFreewayJunctionRouteRteTraceTrailTurnpikeVille", _
                Trim(oRange.Words(3).Next(wdWord, 2)), 0) <> 0 And Len(Trim(oRange.Words(3).Next(wdWord, 2))) > 1) Or _
            (InStr(1, "StreetAvenueRoadRdBoulevardBlvdPikeCircleHighwayHwyCourtCtLaneWayParkwayAlleyBypassEsplanadeFreewayJunctionRouteRteTraceTrailTurnpikeVille", _
                Trim(oRange.Words(3).Next(wdWord, 3)), 0) <> 0 And Len(Trim(oRange.Words(3).Next(wdWord, 3))) > 1) Or _
            InStr(1, "N.E.W.S.", Trim(oRange.Words(3).Next(wdWord, 1) & Trim(oRange.Words(3).Next(wdWord, 2))), 0) <> 0 Then
               .Replacement.Text = "\1\3"
               .Execute Replace:=wdReplaceAll
               'oRange.Text = VBA.Replace(oRange.Text, ",", "")
        End If
        '.Execute
    'End If
    Loop 'continue finding
End With


Hide web component until browser knows what to do with it

Similar to this question: How to prevent flickering with web components?

But different in that I can't just set the inner HTML to nothing until loaded, because there is slotted content, and I don't wish to block rendering the page while it executes the web component JS.

I thought I could add CSS to hide the element and have the web component's init unhide itself, but then that CSS snippet needs to be included wherever the web component is used, which is not very modular and is prone to being forgotten.

I am working on a modal component; here's the code (although I don't think it's particularly relevant):

<div id="BLUR" part="blur" class="display-none">
    <div id="DIALOGUE" part="dialogue">
        <div id="CLOSE" part="close">
            X
        </div>
        <slot></slot>
    </div>
</div>
const name = "wc-modal";
const template = document.getElementById("TEMPLATE_" + name);

class Component extends HTMLElement {
    static get observedAttributes() { return ["open"]; } // prettier-ignore

    constructor() {
        super();
        this.attachShadow({ mode: "open" });
        this.shadowRoot.appendChild(template.content.cloneNode(true));
    }
    connectedCallback() {
        if (this.initialised) return; // Prevent initialising twice if the item is moved
        this.setupEventListners();
        this.init();
        this._upgradeProperty("open");
        this.initialised = true;
    }
    init() {}
    get(id) {
        return this.shadowRoot.getElementById(id);
    }

    _upgradeProperty(prop) {
        /*
        Setting a property before the component has loaded will result in the setter being overriden by the value. Delete the property and reinstate the setter.
        https://developers.google.com/web/fundamentals/web-components/best-practices#lazy-properties
        */
        if (this.hasOwnProperty(prop)) {
            let value = this[prop];
            delete this[prop];
            this[prop] = value;
        }
    }

    // Setup Event Listeners ___________________________________________________
    setupEventListners() {
        this.get("CLOSE").addEventListener("click", () => this.removeAttribute("open"));
        this.get("BLUR").addEventListener("click", () => this.removeAttribute("open"));
        // If the dialogue does not handle click, it propagates up to the blur, and closes the modal
        this.get("DIALOGUE").addEventListener("click", (event) => event.stopPropagation());
    }

    // Attributes _____________________________________________________________
    attributeChangedCallback(name, oldValue, newValue) {
        switch (name) {
            case "open":
                // Disabled is blank string for true, null for false
                if (newValue === null) this.hideModal();
                else this.showModal();
        }
    }

    // Property Getters/Setters _______________________________________________
    get open() { return this.hasAttribute("open"); } // prettier-ignore
    set open(value) { value ? this.setAttribute("open", "") : this.removeAttribute("open"); } // prettier-ignore

    // Utils & Handlers _______________________________________________________
    showModal() {
        this.get("BLUR").classList.remove("display-none");
        // Disable scrolling of the background
        document.body.style.overflow = "hidden";
    }
    hideModal() {
        this.get("BLUR").classList.add("display-none");
        // Renable scrolling of the background
        document.body.style.overflow = "unset";
    }
}

window.customElements.define(name, Component);
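
A minimal sketch of one alternative: the :defined pseudo-class matches only after customElements.define() has run, so a single page-level rule hides every not-yet-upgraded instance without a per-usage snippet:

wc-modal:not(:defined) {
    visibility: hidden; /* reserves layout space; use display: none to collapse it */
}

It still has to live in light-DOM CSS (shadow styles can't apply before the upgrade), but one rule in a shared stylesheet covers every usage of the component.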


Error while starting Logstash: Expected one of [ \\t\\r\\n]

I am connecting Logstash to SQL Server. Could you help me with the following error while starting Logstash?

I executed this command:

logstash.bat -f c:\DevSoft\logstash-8.3.3\bin\logstash-sample.conf

I get the following error. I tried removing all whitespace from the .conf file, but with no luck.

[ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, > :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], "#", "if", [A-Za-z0-9_-], '"', "'", "}" at line 1, column 8 (byte 8) after input {",

Here is logstash-sample.conf, located in the bin folder itself, where logstash.bat is:


input {
  jdbc {
    # jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/mssql-jdbc-7.3.1.jre8-preview.jar"
    jdbc_driver_library => "C:\DevSoft\sqljdbc_11.2\enu\mssql-jdbc-11.2.0.jre11.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://elastic-dev-01.database.windows.net:1433;database=logstashsample;user=**;password=**;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
    jdbc_user => "sa"
    jdbc_password => "***"
    schedule => "* " #--works every one minute. This works like crontab.
    statement => "select * from Products"
    clean_run=>true
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "products_index"
  }
  stdout {
    codec => rubydebug
  }
}


Discord Bot Asynchronous Loop Failure

Please be patient with me. I'm not actually a coder, and I'm working with code that was left to us by someone whom we can no longer contact for help. I've tried to do some research on my own, but I unfortunately can't extrapolate from other solutions to ours due to my insufficient skills.

Part of the issue is also that our code is probably less than optimal and could be looped better by assigning i-values to each task, but I unfortunately don't know how to change it, so I'm just trying to find a solution with what we have. It was working on Heroku, but with their upcoming removal of their free services, we're looking to move it elsewhere and are running into errors.

Briefly, we have a very simple Discord bot whose purpose is to check certain channels and send a message when those channels have not had activity within a certain time period. In more detail, here is the general code for checking one of the channels:

import discord
import datetime
import asyncio
import math
from discord.ext import tasks

client = discord.Client(intents=discord.Intents.default())

@client.event
async def on_ready():
    print('Ready'.format(client))

@tasks.loop(seconds = 30)
async def channelname():
    await client.wait_until_ready()
    channel_id = ################
    
    channel = client.get_channel(channel_id)
    message = await channel.fetch_message(channel.last_message_id)
    printchannel = client.get_channel(################)
    
    if "message to turn off bot" in message.content.lower():
        return
    else:
        msg_secs = (datetime.datetime.utcnow() - message.created_at).total_seconds()
        if msg_secs >=300  and msg_secs <= 360:
            await printchannel.send('reminder message')
        else:
            return

channelname.start()

client.run('bot token')

When we try to run this code, we are currently running into this error (on Railway):

Traceback (most recent call last):
File "botname.py", line 5610, in <module>
test.start()
File "/opt/venv/lib/python3.8/site-packages/discord/ext/tasks/__init__.py", line 398, in start
self._task = asyncio.create_task(self._loop(*args, **kwargs))
File "/nix/store/bhny2arkxrifw0afjbnqqi0ilqnwndqr-setup-env/lib/python3.8/asyncio/tasks.py", line 381, in create_task
loop = events.get_running_loop()
RuntimeError: no running event loop
sys:1: RuntimeWarning: coroutine 'Loop._loop' was never awaited

From what I can tell from looking up this error, the problem is that the loop was just created and cannot have tasks attached to it, so I need to have the code add the tasks in by itself. Unfortunately, all of the solutions I'm finding do use i-values, which we don't use, so I can't figure out how to make it work for us.

Thank you very much in advance for any assistance!
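
A minimal sketch of a likely fix for this exact traceback: in discord.py 2.x, asyncio.create_task() needs a running event loop, so a tasks.loop cannot be started at module level. Starting it from on_ready (guarded so reconnects don't start it twice) keeps the rest of the code unchanged:

@client.event
async def on_ready():
    print('Ready')
    if not channelname.is_running():  # guard: on_ready can fire again on reconnect
        channelname.start()

client.run('bot token')

The bare channelname.start() line before client.run('bot token') would then be removed.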



2022-08-27

How to get all key and values from nested JSON in java

Hi, I need to read all keys and values from a nested JSON, wherever there is an inner JSON; I need those values while ignoring the outer key. From the JSON below I need the key/values of the nested JSON, like responseStatus: Passed, "statusCode": "200", "retrieveQuoteResponse": null, "quoteGuid": null, etc., ignoring the enclosing keys like responsePreamble and quoteProductList, which have nested JSON inside them.

{
    "responsePreamble": {
        "responseStatus": "Passed",
        "statusCode": "200",
        "responseMessage": "Records Found"
    },
    "retrieveQuoteResponse": null,
    "totalQuoteProductCount": 2,
    "quoteProductList": {
        "quoteGuid": null,
        "quantity": 180
}

Code:

ObjectReader reader = new ObjectMapper().readerFor(Map.class); 
Map<String, Map<String, String>> employeeMap = reader.readValue(jsonObject); 
for (Entry<String, Map<String, String>> empMap : employeeMap.entrySet()) { 
    Map<String, String> addMap = empMap.getValue(); 
    if(addMap!=null) { 
        for (Entry<String, String> addressSet : addMap.entrySet()) {
            System.out.println(addressSet.getKey() + " :: " + addressSet.getValue()); 
        } 
    } 
}

OutPut:

responseStatus :: Passed
statusCode :: 200
responseMessage :: Records Found
Exception in thread "main" java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
    at com.im.api.tests.CompareTwoJsons.main(CompareTwoJsons.java:78)
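
A minimal sketch with Jackson's tree model: recursing through JsonNode descends past container keys like responsePreamble and prints only the scalar fields, avoiding the Map cast that blows up on non-object values:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class FlattenJson {
    public static void main(String[] args) throws Exception {
        String json = "{\"responsePreamble\":{\"responseStatus\":\"Passed\",\"statusCode\":\"200\"},"
                + "\"retrieveQuoteResponse\":null,"
                + "\"quoteProductList\":{\"quoteGuid\":null,\"quantity\":180}}";
        printScalars(new ObjectMapper().readTree(json));
    }

    static void printScalars(JsonNode node) {
        if (node.isObject()) {
            node.fields().forEachRemaining(e -> {
                JsonNode value = e.getValue();
                if (value.isObject() || value.isArray()) {
                    printScalars(value); // descend, ignoring the container key itself
                } else {
                    System.out.println(e.getKey() + " :: " + value.asText());
                }
            });
        } else if (node.isArray()) {
            node.forEach(FlattenJson::printScalars); // visit each array element
        }
    }
}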


How to loop and index through file content in Python and assign each line to a different variable

If I have a file.txt and the content looks like this:

   
BEGAN_SIT
s_alis='HTTP_WSD'
xps_entity='HTTP_S_ER'
xlogin_mod='http'
xdest_addr='sft.ftr.net'
xmax_num='99'
xps_pass='pass'
xparam_nm='htp'
#?SITE END

How can I loop through it and assign each line to a different variable?
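
A minimal sketch: rather than one variable per line, a dict keyed by the setting name is the idiomatic way to make every value addressable. The marker lines (BEGAN_SIT, #?SITE END) are skipped, and the quotes are stripped from each value:

settings = {}
with open('file.txt') as f:
    for line in f:
        line = line.strip()
        if '=' not in line or line.startswith('#'):
            continue  # skips markers and blank lines
        key, _, value = line.partition('=')
        settings[key] = value.strip("'")

print(settings['s_alis'])    # HTTP_WSD
print(settings['xmax_num'])  # 99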



Is there a way to manage col-sm children with justify-content-end parent?

I am trying to set 3 divs to the right of the screen, stacked horizontally, which should become stacked vertically on small screens. justify-content-end works perfectly on the parent div until I use col-sm on the children; then I lose the justification. Why would col-sm dismiss the justification? How can I solve this?

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.6.2/dist/css/bootstrap.min.css" integrity="sha384-xOolHFLEh07PJGoPkLv1IbcEPTNtaed2xpHsD9ESMhqIYd0nLMwNLD69Npy4HI+N" crossorigin="anonymous">

<div class="d-flex justify-content-end">
  <div class="order-1 p-2">Some action 1</div>
  <div class="order-2 p-2">Another action 2</div>
  <div class="order-3 p-2">Triple divs 3</div>
</div>

The code above works and justifies perfectly, but does not stack the items vertically on small screens. The code below should do it, but it just won't!

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.6.2/dist/css/bootstrap.min.css" integrity="sha384-xOolHFLEh07PJGoPkLv1IbcEPTNtaed2xpHsD9ESMhqIYd0nLMwNLD69Npy4HI+N" crossorigin="anonymous">

<div class="d-flex justify-content-end">
  <div class="order-1 p-2 col-sm">Some action 1</div>
  <div class="order-2 p-2 col-sm">Another action 2</div>
  <div class="order-3 p-2 col-sm">Triple divs 3</div>
</div>
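
A minimal sketch of one explanation and workaround: col-sm gives each child flex-grow: 1 from the sm breakpoint up, so the children expand to fill the row and there is nothing left for justify-content-end to push against. Switching the parent's direction per breakpoint stacks on small screens and right-aligns on larger ones, without col-sm on the children:

<div class="d-flex flex-column flex-sm-row justify-content-end">
  <div class="order-1 p-2">Some action 1</div>
  <div class="order-2 p-2">Another action 2</div>
  <div class="order-3 p-2">Triple divs 3</div>
</div>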


How to pass argument to onclick callback - yew function component

How do I correctly pass an argument to an onclick event handler in a yew function component?

What I have:

#[function_component(Calculator)]
pub fn calculator() -> Html {
    let navigator = use_navigator().unwrap();
    let handle_formula_click = Callback::from(move |_| {
        navigator.push(&AppRoute::Formula { id })
    });

    html! {
            <div>
                ...
                <button onclick={handle_formula_click}>
                    ...
                </button>
                ...
            </div>
    }
}

I would like to pass in a string to the handle_formula_click callback

What I want:

#[function_component(Calculator)]
pub fn calculator() -> Html {
    let navigator = use_navigator().unwrap();
    let handle_formula_click = Callback::from(move |id: String| {
        navigator.push(&AppRoute::Formula { id })
    });

    html! {
            <div>
                ...
                <button onclick={handle_formula_click("fixed1"}>
                    ...
                </button>
                ...
            </div>
    }
}
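
A minimal sketch of the usual pattern: onclick hands the handler a click event, not a string, so the id must be captured by the closure instead of passed at call time. A small factory closure that clones the navigator per button does this:

#[function_component(Calculator)]
pub fn calculator() -> Html {
    let navigator = use_navigator().unwrap();

    // Factory: builds a distinct Callback with `id` baked in
    let handle_formula_click = |id: String| {
        let navigator = navigator.clone();
        Callback::from(move |_| navigator.push(&AppRoute::Formula { id: id.clone() }))
    };

    html! {
        <div>
            <button onclick={handle_formula_click("fixed1".to_string())}>
                { "Formula" }
            </button>
        </div>
    }
}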


Use REGEXEXTRACT() to extract only uppercase letters from a sentence in Google Sheets

I'm thinking this should be basic, but having tried a number of things, I'm nowhere nearer a solution.

I have a list of names and want to extract the initials of the names in the next column:

Name (have)                Initials (want)
John Wayne                 JW
Cindy Crawford             CC
Björn Borg                 BB
Alexandria Ocasio-Cortez   AOC
Björk                      B
Mesut Özil                 MÖ

Note that some of these have non-English letters, and they may also include hyphens. Using REGEXEXTRACT() I've been able to extract the first initial, but that's where it stops working for me. For example, this should work according to regex101:

=REGEXEXTRACT(AH2, "\b[A-Z]+(?:\s+[A-Z]+)*") but only yields the first letter.
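
A minimal sketch of an alternative: Sheets' regex engine is RE2, which supports Unicode character classes, so replacing everything that is not an uppercase letter with nothing leaves just the initials (A2 is assumed to hold the name):

=REGEXREPLACE(A2, "[^\p{Lu}]+", "")

\p{Lu} matches any Unicode uppercase letter, so Ö and Ä survive where a plain [A-Z] would drop them.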



2022-08-26

Word VBA copy text formatted text in a certain font to a file and other formatting in other file

From a comparison docx file, I need to extract the text formatted as strikethrough into one Word file and the text formatted as double underline into another, to be able to perform the word count of deleted and newly inserted text separately. To do this, I wrote the macro below; it activates the correct files, but only copies and pastes the formatting resulting from the first search.

Sub WSC_extraction_for_wordcount()
    'This macro extracts double underlined text to the file "target_ins"
    'This macro extracts strikethrough text to the file "target_del"

    Application.ScreenUpdating = False
    Selection.HomeKey Unit:=wdStory
    Selection.Find.ClearFormatting

    'STRIKETHROUGH processing
    Do
        With Selection.Find.Font
            .StrikeThrough = True 'Then
            Selection.Find.Execute FindText:="", Forward:=True, Format:=True
            Selection.Cut
            Windows("target_del.docx").Activate
            Selection.PasteAndFormat (wdPasteDefault)
            Selection.TypeParagraph
            Windows("source.docx").Activate
        End With

        'DOUBLE UNDERLINE processing
        With Selection.Find.Font
            .Underline = wdUnderlineDouble = True 'Then
            Selection.Find.Execute FindText:="", Forward:=True, Wrap:=wdFindContinue, Format:=True
            Selection.Cut
            Windows("target_ins.docx").Activate
            Selection.PasteAndFormat (wdPasteDefault)
            Selection.TypeParagraph
            Windows("source.docx").Activate
        End With
    Loop
End Sub

I would be grateful if someone could help me transform the options into something like: if the next sentence you encounter is formatted as strikethrough, copy it to the file target_del; if the next sentence you encounter is formatted as double underline, copy it to the file target_ins.

Thank you in advance!



Why do I get "AttributeError: type object 'Placeholder' has no attribute 'loads'" when running PyInstaller?

I am using Python 3.10.6, pip 22.2.2 on Windows 11

I have a program which uses yfinance to grab stock data and sklearn.svm's SVR to predict stock prices. I want to turn this program into a .exe file using PyInstaller. PyInstaller finishes and the .exe file is created, but when I run it I get:

 File "PyInstaller\loader\pyimod02_importers.py", line 493, in exec_module
  File "requests_cache\__init__.py", line 7, in <module>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "PyInstaller\loader\pyimod02_importers.py", line 493, in exec_module
  File "requests_cache\backends\__init__.py", line 7, in <module>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "PyInstaller\loader\pyimod02_importers.py", line 493, in exec_module
  File "requests_cache\backends\base.py", line 18, in <module>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "PyInstaller\loader\pyimod02_importers.py", line 493, in exec_module
  File "requests_cache\serializers\__init__.py", line 6, in <module>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "PyInstaller\loader\pyimod02_importers.py", line 493, in exec_module
  File "requests_cache\serializers\preconf.py", line 122, in <module>
  File "requests_cache\serializers\pipeline.py", line 44, in __init__
  File "requests_cache\serializers\pipeline.py", line 44, in <listcomp>
AttributeError: type object 'Placeholder' has no attribute 'loads'

And while PyInstaller is building the .exe file I get:

587 WARNING: Failed to collect submodules for 'pkg_resources._vendor.pyparsing.diagram' because importing 'pkg_resources._vendor.pyparsing.diagram' raised: AttributeError: module 'railroad' has no attribute 'DiagramItem'

So I think the problem might be because of matplotlib? I use it at the end to plot the predicted price.

The imports I am using on my program are:

import yfinance as yf
import requests_cache
import numpy as np
from sklearn.svm import SVR 
import matplotlib.pyplot as plt
import datetime as dt 

I ran PyInstaller by moving into the right directory and then running:

pyinstaller --onefile -w stockPredictor.py



How to calculate days between two dates from different sheets where the emails from both sheets in row match?

I have 2 sheets in the same workbook with dates, and I need to calculate the number of days between the two dates. There is a common identifier, email, in the rows of both sheets. If there are 0 days between the dates, it should state 0; if a date is missing, it should show blank ("").

https://docs.google.com/spreadsheets/d/1tigqy4hKFn0Q7c-3ICyI6WREnsIBFZ2oLO8CEcdxe8w/edit?usp=sharing

Results go in Sheet1!J2. Start date: Sheet1!D:D. End date: Day_count!B:B. Matching identifier = email in Col1 on both sheets.

What would be the best way to work this out without using a helper column?

Answer: This works when the lookup isn't in Col1:

={"Day count"; ARRAYFORMULA(IFNA(DAYS(VLOOKUP(A2:A, {Days_count!B2:B, Days_count!C2:C}, 2, 0), D2:D)))}


Where should RBAC be implemented?

To give you some background: I have frequently worked with RBAC implemented at the SQL level, but I read in some articles that it might not be very scalable.

Should RBAC be implemented on, say:

  • On the Database level (i.e. row or column based access control)
  • On the Application level (i.e. logic in the code) perhaps with some document storage support
  • On some other level

What are the pros and cons of each approach in terms of scalability, and what is the industry gold standard?



Track how much time the client (Angular) call is taking to hit the API controller

I want to create a performance tool to track the time taken for a call to reach the API controller and then pass through the different layers of the application and the DB.

When using UTC datetimes, I ran into an issue: the server's UTC time is 5 seconds behind the client's UTC time (both client and server are in the same time zone).

E.g.: the request was sent at 07:10:05 AM and it reached the server at 07:10:01.

So, if the server time is not correct, using UTC time will also give the wrong duration, right?

Are there any other ideas for implementing this requirement?
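
A minimal sketch of one clock-skew-free approach (class and header names are assumptions): measure the round trip on the client alone with performance.now(), and let the server report its own processing time in a Server-Timing response header; the difference approximates network and queuing time without comparing the two clocks:

import { Injectable } from '@angular/core';
import {
  HttpEvent, HttpHandler, HttpInterceptor, HttpRequest, HttpResponse
} from '@angular/common/http';
import { Observable } from 'rxjs';
import { tap } from 'rxjs/operators';

@Injectable()
export class TimingInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    const started = performance.now(); // monotonic, immune to clock skew
    return next.handle(req).pipe(
      tap(event => {
        if (event instanceof HttpResponse) {
          const total = performance.now() - started;
          // server-side duration, if the API emits one, e.g. "app;dur=12.3"
          const serverTiming = event.headers.get('Server-Timing');
          console.log(`${req.url}: ${total.toFixed(1)} ms total, server: ${serverTiming}`);
        }
      })
    );
  }
}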



How can I store one to many in PostgreSQL via array of alt keys?

I have two classes - Role and PosUser.

public class Role : IEntity
{
    public string Name { get; set; }
    [Column(TypeName = "jsonb")]
    public string[] Permissions { get; set; }
    public bool IsProtected { get; set; }
    public uint Priority { get; set; }
    
    #region IEntity
    #endregion
}
public class PosUser : IEntity
{
    public string Name { get; set; }
    public List<Role> Roles { get; set; }

    #region IEntity
    #endregion
}

I want to have two tables, one for each of these entities. Roles should not know anything about users, but every user should store a jsonb array of role names, like ["Admin", "Test"].

I tried to use:

protected override void OnModelCreating(ModelBuilder builder)
{
    builder.Entity<Role>().HasAlternateKey(x => x.Name);

    builder.Entity<PosUser>().Property(u => u.Roles)
        .HasPostgresArrayConversion(r => r.Name, name => Find<Role>(name));
    base.OnModelCreating(builder);
}

But I got an error about the context being disposed.

These don't fit:

  • Store links by ForeignKeys in new table
  • Store all links to users at Role table


2022-08-25

JdbcEnvironmentInitiator: "HHH000342: Could not obtain connection to query metadata". What is the solution?

#hibernate properties

spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MYSQLInnoDBDialect
spring.jpa.hibernate.ddl-auto=update
logging.level.org.hibernate.SQL=DEBUG


JFrog plan available in the Azure marketplace with NPM package support

Which JFrog plan available in the Azure marketplace supports NPM packages at a low cost? Please suggest one.



Jfrog Artifactory remote repository with certificates

Server: Ubuntu 20.04
Jfrog Artifactory: 7.39.10

We have a lot of remote repos with certificate authentication to Red Hat.
Whenever we automatically reboot the Artifactory server, the remote repos
have problems connecting to Red Hat.

But, very curiously, if I go to the config menu of one remote repo and make
any little change (no matter what), immediately all remote repos can connect
to Red Hat again.

Does anyone have an idea why this happens and how it can be fixed?



antMatcher not working with antMatchers security config

I am working on a Spring Boot security config where I want one of the URLs to be excluded from the security filter.

URL format: URL/v1/btob/**.
To be excluded URL format: URL/v1/btob/icici/pay

Here's my configure method:

@Override
public void configure(HttpSecurity http) throws Exception {
     http
         .csrf().disable();
     http
         .sessionManagement()
             .sessionCreationPolicy(SessionCreationPolicy.STATELESS);

     http
         .antMatcher("/v1/btob/**")
         .httpBasic()
             .and()
         .csrf().disable()
          .sessionManagement()
             .sessionCreationPolicy(SessionCreationPolicy.STATELESS)
             .and()
         .cors()
             .and()
         .authorizeRequests()
             .antMatchers(HttpMethod.POST, "/icici/pay").permitAll()
             .anyRequest().authenticated()
             .and()
         .addFilterBefore(btoBFilter, UsernamePasswordAuthenticationFilter.class);
}

@Override
public void configure(WebSecurity web) {

    web
       .ignoring()
           .antMatchers(HttpMethod.POST, "/v1/btob/icici/pay");
}

I did this, but the excluded URL still goes through the filter. How do I fix this? I even tried ignoring the URL globally in the second configure method, but it didn't help.



R improve loop efficiency: Operating on columns that correspond to rows in a second dataframe

I have two data frames:

dat <- data.frame(Digits_Lower = 1:5,
                  Digits_Upper = 6:10,
                  random = 20:24)
dat
#>   Digits_Lower Digits_Upper random
#> 1            1            6     20
#> 2            2            7     21
#> 3            3            8     22
#> 4            4            9     23
#> 5            5           10     24

cb <- data.frame(Digits = c("Digits_Lower", "Digits_Upper"),
                 x = 1:2, 
                 y = 3:4)
cb                 
#>         Digits x y
#> 1 Digits_Lower 1 3
#> 2 Digits_Upper 2 4

I am trying to perform some operation on multiple columns in dat, similar to these examples: In data.table: iterating over the rows of another data.table and R multiply columns by values in second dataframe. However, I am hoping to operate on these columns with an extended expression for every value in its corresponding row in cb. The solution should be applicable to a large dataset. I have created this for-loop so far.

dat.loop <- dat
for(i in seq_len(nrow(cb)))
{
#create new columns from the Digits column of `cb`
  dat.loop[paste0("disp", sep = '.', cb$Digits[i])] <- 
    #some operation using every value in a column in `dat` with its corresponding row in `cb` 
    (dat.loop[, cb$Digits[i]]- cb$y[i]) * cb$x[i]
}
dat.loop 
#>   Digits_Lower Digits_Upper random disp.Digits_Lower disp.Digits_Upper
#> 1            1            6     20                -2                 4
#> 2            2            7     21                -1                 6
#> 3            3            8     22                 0                 8
#> 4            4            9     23                 1                10
#> 5            5           10     24                 2                12

I will then perform operations on the data that I appended to dat in dat.loop applying a similar for-loop, and then perform yet another operation on those values. My dataset is very large, and I imagine my use of for-loops will become cumbersome. I am wondering:

  1. Would another method improve efficiency such as using data.table or tidyverse?

  2. How would I go about using another method, or improving my for-loop? My main confusion is how to write concise code to perform operations on columns in dat with corresponding rows in cb. Ideally, I would split my for-loop into multiple functions that would for example, avoid indexing into cb for the same values over and over again or appending unnecessary data to my dataframe, but I'm not really sure how to do this.

Any help is appreciated!

EDIT:

I've modified the code @Desmond provided, allowing for more generic code, since dat and cb will come from user-inputted files, and dat can have a varying number of columns/column names that I will be operating on (columns in dat will always start with "Digits_" and will be specified in the "Digits" column of cb).

library(tidyverse)

results2 <- dat %>% 
  crossing(cb) %>% 
  rowwise() %>%
  mutate(disp = (get(Digits) - y) * x) %>%
  ungroup() %>% 
  pivot_wider(names_from = Digits,
              values_from = disp,
              names_prefix = "disp_")

results3 <- results2 %>% 
  group_by(random) %>% 
  fill(starts_with("disp"), .direction = c("downup")) %>% 
  ungroup() %>% 
  select(-c(x,y)) %>% 
  unique()
              
results3
#>   Digits_Lower Digits_Upper random disp_Digits_Lower disp_Digits_Upper
#> 1            1            6     20                -2                 4
#> 2            2            7     21                -1                 6
#> 3            3            8     22                 0                 8
#> 4            4            9     23                 1                10
#> 5            5           10     24                 2                12
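On the data.table side of question 1 above, a minimal sketch (same dat and cb as above; untested at scale): set() adds each new column by reference, avoiding the copies that data.frame assignment can make.

library(data.table)

dt <- as.data.table(dat)
for (i in seq_len(nrow(cb))) {
  col <- cb$Digits[i]
  # set() modifies dt in place; paste0 builds the same disp.* names as above
  set(dt, j = paste0("disp.", col), value = (dt[[col]] - cb$y[i]) * cb$x[i])
}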


infinite loop with react/redux

I have tirelessly tried everything I can find on Stack Overflow for this issue and am getting nowhere. We are using React/TypeScript, Redux, and Saga. I have a list of categories to bring back for a nav list and am using useEffect to dispatch the action to the Redux store. Our .tsx file:

  const dispatch = useDispatch();
  const categories = useSelector((state) => state?.categories?.payload);
  const loadCategories = () => {
    dispatch(getCategories(categories));
  };

  useEffect(() => {
   loadCategories();
  }, []);
 
    {categories?.map((x, index) => (
      <Link href={"/store/" + `${x.name}` + "/s"}>
        <a
          type="button"
          id={`${x.name}`}
          title={`${x.name}`}
          className={"xl:px-3 px-2 py-[1.15rem] font-normal"}>
          {x.name}
        </a>
      </Link>
    ))}

Network traffic just shows hundreds of requests going out to the category endpoint -- stumped!

Still stuck, so I'm adding our redux/saga files. Actions:

import {GET_CATEGORIES} from './actionTypes'

export const getCategories = (categories: any) => {
    return {
        type: GET_CATEGORIES,
        payload: categories,
    }
}

reducer:

import {GET_CATEGORIES} from './actionTypes'

const reducer = (state = [], action) => {
    switch (action.type) {
        case GET_CATEGORIES:
            state = {
                ...state,
                payload: action.payload,
            }
            break
        default:
            state = {...state}
            break
    }
    return state
}
export default reducer

saga:

let categoriesApiService = container.resolve(CategoriesApiService)

const categoryApi = async () => {
    return firstValueFrom(
        categoriesApiService.GetCategoryTree({
            path: {version: '1'},
            query: {},
        })
    )
}

function* getCategoriesTree() {
    try {
        let categoryTreeDTO: CategoryTreeDTO = yield call(categoryApi)
        yield put(getCategories(categoryTreeDTO))
    } catch (error: any) {
        yield put(apiError(error?.response?.data?.message))
    }
}

export function* watchGetCategories() {
    yield takeEvery(GET_CATEGORIES, getCategoriesTree)
}

function* categorySaga() {
    yield all([fork(watchGetCategories)])
}

export default categorySaga
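One thing that stands out now (an observation from the code above, not a confirmed diagnosis): watchGetCategories listens for GET_CATEGORIES, and on success getCategoriesTree put()s getCategories(...), which dispatches GET_CATEGORIES again, so every fetch schedules another fetch, independent of the useEffect. A sketch of splitting the action types (GET_CATEGORIES_REQUEST and GET_CATEGORIES_SUCCESS are hypothetical names):

// actionTypes: two distinct types, one to trigger the saga, one to store data
export const GET_CATEGORIES_REQUEST = 'GET_CATEGORIES_REQUEST'
export const GET_CATEGORIES_SUCCESS = 'GET_CATEGORIES_SUCCESS'

// actions
export const getCategoriesRequest = () => ({type: GET_CATEGORIES_REQUEST})
export const getCategoriesSuccess = (categories: any) => ({
    type: GET_CATEGORIES_SUCCESS,
    payload: categories,
})

// saga: consume the request, emit only the success
function* getCategoriesTree() {
    const categoryTreeDTO: CategoryTreeDTO = yield call(categoryApi)
    yield put(getCategoriesSuccess(categoryTreeDTO))
}

export function* watchGetCategories() {
    yield takeEvery(GET_CATEGORIES_REQUEST, getCategoriesTree)
}

The reducer would then switch on GET_CATEGORIES_SUCCESS, and the component would dispatch getCategoriesRequest() from its useEffect.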


2022-08-24

SVM problem - name 'model_SVC' is not defined

I have a problem with this code:

    from sklearn import svm
    model_SVC = SVC()
    model_SVC.fit(X_scaled_df_train, y_train)
    svm_prediction = model_SVC.predict(X_scaled_df_test)

The error message is

NameError                                 Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_14392/1339209891.py in <module>
----> 1 svm_prediction = model_SVC.predict(X_scaled_df_test)

NameError: name 'model_SVC' is not defined

Any ideas?
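For reference, a possible fix (assuming the root cause is that SVC was never imported: from sklearn import svm binds the module, not the class, so the line defining model_SVC fails and the later predict call then raises the NameError):

from sklearn.svm import SVC   # or keep `from sklearn import svm` and use svm.SVC()

model_SVC = SVC()
model_SVC.fit(X_scaled_df_train, y_train)
svm_prediction = model_SVC.predict(X_scaled_df_test)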



Error response from daemon: manifest for abhishek8054/token-app:latest not found: manifest unknown: manifest unknown

I made my own Docker image for a simple React app and pushed it to Docker Hub. Now, when I try to pull the image on my system, it shows me an error:

Error response from daemon: manifest for abhishek8054/token-app:latest not found: manifest unknown: manifest unknown".

I must be doing something wrong.

My Dockerfile code is:

FROM node:16-alpine
WORKDIR /app/
COPY package*.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm","start"]

And I made the image from the following command:

docker image build -t abhishek8054/token-app:latest .

And pushed my image with the following command:

docker push abhishek8054/token-app:latest

And pulled it again with the following command:

docker pull abhishek/8054/token-app

And it gives me the error above.
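Worth double-checking (an observation from the commands above, not a verified fix): the repository in the pull command, abhishek/8054/token-app, doesn't match the pushed reference abhishek8054/token-app:latest. A pull matching what was pushed would be:

docker pull abhishek8054/token-app:latest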



merging df in pandas based on "contains" values

I have 2 dfs

df_1

Nº.do Incidente Status  Description Per_Extracao
0   IN6948271   ENCERRADO   GR26 D.I.T.I. >>> ABEND NO JOB PP_SASG_GD9822...    DE : 2022/01/05 ATÉ : 2022/12/08
1   IN6948304   ENCERRADO   GR26 D.I.T.I. >>> ABEND NO JOB PP_AACE_R4539 ...    DE : 2022/01/05 ATÉ : 2022/12/08
2   IN6948307   ENCERRADO   GR26 D.I.T.I. >>> ABEND NO JOB PP_ADAT_SPRK_EX...   DE : 2022/01/05 ATÉ : 2022/12/08
3   IN6948309   ENCERRADO   GR26 D.I.T.I. >>> ABEND NO JOB PP_ADAT_SPRK_EX...   DE : 2022/01/05 ATÉ : 2022/12/08
4   IN6948310   ENCERRADO   GR26 D.I.T.I. >>> ABEND NO JOB PP_ADAT_SPRK_EX...   DE : 2022/01/05 ATÉ : 2022/12/08
5   IN6948311   ENCERRADO   GR26 D.I.T.I. >>> ABEND NO JOB PP_ADAT_SPRK_EX...   DE : 2022/01/05 ATÉ : 2022/12/08

df_2

    JOB_NAME    JOB_STREAM_NAME
0   PP_AACD_NR_D8706_TIHIBRIDA_PROC_EXCUC_D P26_AACD_FAC_TOD
1   PP_SASG_GD9822  P26_AACE_U08
2   PP_AACE_R4539   P26_AACE_U09
3   PP_AACE_R4539_CONS_JUNC P26_AACE_U08
4   PP_AACE_R4539_FMRC_TD_01    P26_AACE_U08
5   PP_AACE_R4539_FMRC_TD_02    P26_AACE_U08

I'm trying to merge them based on the value of JOB_NAME in df_2.

The output should be something like this:

merged_df

Nº.do Incidente Status  Description Per_Extracao JOB_NAME    JOB_STREAM_NAME
0   IN6948271   ENCERRADO   GR26 D.I.T.I. >>> ABEND NO JOB PP_SASG_GD9822...    DE : 2022/01/05 ATÉ : 2022/12/08 PP_SASG_GD9822  P26_AACE_U08
1   IN6948304   ENCERRADO   GR26 D.I.T.I. >>> ABEND NO JOB PP_AACE_R4539 ...    DE : 2022/01/05 ATÉ : 2022/12/08 PP_AACE_R4539   P26_AACE_U09

It's not a regular join; it's a "contains" condition (the JOB_NAME value from df_2 is found inside the Description column of df_1).

Could you help me, please?
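A sketch of one approach (untested against the real data; it assumes the JOB_NAME values appear verbatim inside Description): build a regex alternation of the escaped job names, extract the match into a new column, then do a regular merge.

import re
import pandas as pd

# longest names first, so PP_AACE_R4539_CONS_JUNC wins over its prefix PP_AACE_R4539
names = sorted(df_2["JOB_NAME"], key=len, reverse=True)
pattern = "(" + "|".join(map(re.escape, names)) + ")"

df_1["JOB_NAME"] = df_1["Description"].str.extract(pattern, expand=False)
merged_df = df_1.merge(df_2, on="JOB_NAME", how="inner")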



tf.cast not changing the dtype. ORIGINAL ISSUE: tensorflowjs Error: Argument 'x' passed to 'conv2d' must be float32 tensor, but got int32 tensor

I'm trying to load a model I developed in TensorFlow (Python) with tensorflowjs and make a prediction for a test input, as follows:

tf_model = await tf.loadGraphModel('http://localhost:8080/tf_models/models_js/model/model.json')
let test_output = await tf_model.predict(tf.tensor2d([0.0, -1.0, 1.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0], [1, 9], 'float32'))
console.log("[Test tf model]:", test_output.arraySync())

I'm getting this error in the JS console at tf_model.predict:

Error: Argument 'x' passed to 'conv2d' must be float32 tensor, but got int32 tensor

even though the input of the Conv2D layer is of type float32 in the model definition:


inputs = tf.keras.layers.Input((9))

# One-Hot encoding
x = tf.cast(tf.one_hot(tf.cast(inputs + 1, tf.int32), 3), tf.float32)

x = tf.reshape(x, (-1, 3, 3, 3))
x = tf.keras.layers.Conv2D(
        filters=3**5, kernel_size=(3, 3), kernel_regularizer=kernel_regularizer
    )(x)

Anybody knows why this could happen?

EDIT: It seems tf.cast does not change the type. If I run

print(tf.shape(inputs))
x = tf.cast(tf.one_hot(tf.cast(inputs + 1, tf.int32), 3), tf.float32)
print(tf.shape(x))

I keep getting tf.int32:

KerasTensor(type_spec=TensorSpec(shape=(2,), dtype=tf.int32, name=None), inferred_value=[None, 9], name='tf.compat.v1.shape_12/Shape:0', description="created by layer 'tf.compat.v1.shape_12'")
KerasTensor(type_spec=TensorSpec(shape=(3,), dtype=tf.int32, name=None), inferred_value=[None, 9, 3], name='tf.compat.v1.shape_13/Shape:0', description="created by layer 'tf.compat.v1.shape_13'")

???
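A quick check on the EDIT above (this is about the tf.shape API, not the original conv2d error): tf.shape() returns a tensor of dimension sizes, and that tensor's dtype is always int32 regardless of its argument's dtype, so the printouts above do not show x's element type. Inspecting the dtype directly tells whether the cast took effect:

# tf.shape(x) describes x's dimensions, so its own dtype is always tf.int32;
# x.dtype is what reflects the tf.cast above
print(x.dtype)   # expected: <dtype: 'float32'>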



WPML - SQL query to get post by current language

I'm using WPML and need a custom search query (the default WordPress search is not working in my case). Let's say my query is: "SELECT * FROM wp_posts WHERE post_title LIKE %s% AND [WPML_CURRENT_LANGUAGE_CONDITION]";

Please help me figure out which table I need to look at for this.

Thanks!
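For what it's worth, a sketch of the join I would expect (an assumption based on WPML's wp_icl_translations table, which keeps a language_code per translated element; the search term and 'en' are placeholders):

-- element_type for posts/pages is 'post_' followed by the post type
SELECT p.*
FROM wp_posts p
JOIN wp_icl_translations t
  ON t.element_id = p.ID
 AND t.element_type = CONCAT('post_', p.post_type)
WHERE p.post_title LIKE '%term%'
  AND t.language_code = 'en';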



2022-08-23

TF WideDeepModel - Shape Error when Passing Different Features for Wide and Deep Models

I am attempting to recreate the Wide and Deep model using Tensorflow's WideDeepModel library; however, I am encountering an issue when attempting to differentiate between the wide model inputs and the deep model inputs. Referenced below is the code that I am using.

# Create LinearModel and DNN Model as in Examples 1 and 2
  optimizer = tf.keras.optimizers.Ftrl(
          l1_regularization_strength=0.001,
          learning_rate=tf.keras.optimizers.schedules.ExponentialDecay(
              initial_learning_rate=0.1, decay_steps=10000, decay_rate=0.9))

  linear_model = tf.compat.v1.keras.experimental.LinearModel()
  linear_model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])
  linear_model.fit(X_train[wideInputs], y_train, epochs=50)

  dnn_model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1)
  ])
  dnn_model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])

  combined_model = tf.compat.v1.keras.experimental.WideDeepModel(linear_model,
                                                                dnn_model)
  combined_model.compile(
      optimizer=[optimizer, optimizer], loss='mse', metrics=['accuracy'])
  combined_model.fit([X_train[wideInputs], X_train[wideInputs] + X_train[dnnInputs]], y_train, epochs=50)
  print('Accuracy', combined_model.evaluate(X_test, y_test, return_dict=True))

My goal is to fit the combined_model variable with only the linear inputs in the first (wide) model and with the deep inputs (both wide and deep features) in the second; however, I encounter an error with mismatched shapes between the two inputs. I assume that the number of rows needs to remain the same, but that the features can vary between the two models, since we are defining two separate sets of features. However, when I use a different set of features, the following error is returned:

ValueError: Exception encountered when calling layer "linear_model" (type LinearModel).

    Input 0 of layer "dense" is incompatible with the layer: expected axis -1 of input shape to have value 3004, but received input with shape (None, 3009)

    Call arguments received by layer "linear_model" (type LinearModel):
      • inputs=tf.Tensor(shape=(None, 3009), dtype=float32)

Any feedback would be greatly appreciated.

Just to note, when I do not differentiate the features (fit using combined_model.fit([X_train, X_train], y_train, epochs=50)), the model runs; however, it is not using the expected wide and deep inputs this way.

I also referenced the following pages to work with the code.

  1. https://www.tensorflow.org/guide/migrate/canned_estimators#tf2_using_keras_widedeepmodel
  2. https://www.tensorflow.org/api_docs/python/tf/keras/experimental/WideDeepModel

Additionally, the dataset that I am passing in looks like the following: https://i.stack.imgur.com/h1ae1.png
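One detail that may matter here (an observation about pandas, not a confirmed diagnosis): with DataFrames, X_train[wideInputs] + X_train[dnnInputs] is element-wise addition over the union of columns, not column concatenation. If the intent is "wide features plus deep features", concatenating the column-name lists keeps the selection explicit:

# hypothetical: wideInputs and dnnInputs are lists of column names
combined_model.fit(
    [X_train[wideInputs], X_train[wideInputs + dnnInputs]],  # list concatenation selects both sets
    y_train, epochs=50)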



Error in last 3 months calculation in PowerBi

Calculating the last 3 months (or an average over them) was working in one PBIX file. Now, when I try to use it in another, it comes up empty. The naming seems to be right, and the Calendar is a copy of the one from the working PBIX.

What can have gone wrong between the two files, is there something obvious you can see?

Is there something I should check and possibly change?

SLS L3m (USD) =
VAR months = 3
VAR sum_period =
    CALCULATE(
        [Sales (USD)],
        DATESINPERIOD('Calendar'[Date], FIRSTDATE('Calendar'[Date]) + 1, -months, MONTH)
    )
RETURN
    IF(NOT(ISBLANK([Sales (USD)])), sum_period)

INV avg L12m (USD) =
VAR months = 12
VAR sum_period =
    CALCULATE(
        [Inventory (USD)],
        DATESINPERIOD('Calendar'[Date], LASTDATE('Calendar'[Date]), -months, MONTH)
    )
RETURN
    IF(NOT(ISBLANK([Inventory (USD)])), sum_period / months)




java.lang.NoSuchFieldError: Companion when using `influx-client-reactive` and `quarkus`

The error occurs when instantiating a client:

    InfluxDBClientReactive influxDBClient = InfluxDBClientReactiveFactory.create(
            influxConf.url(),
            influxConf.username(),
            influxConf.password().toCharArray());

The okhttp dependency is excluded from the quarkus-bom:

implementation enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}") {
    exclude group: "com.squareup.okhttp3", module: "okhttp"
}
implementation "com.influxdb:influxdb-client-reactive:6.4.0" 

Otherwise okhttp 3.x.x is forced, which would cause

java.lang.NoSuchMethodError: 'okhttp3.RequestBody okhttp3.RequestBody.create(java.lang.String, okhttp3.MediaType)'

at the same line.

trace:

Companion
java.lang.NoSuchFieldError: Companion
    at okhttp3.internal.Util.<clinit>(Util.kt:70)
    at okhttp3.HttpUrl$Builder.parse$okhttp(HttpUrl.kt:1239)
    at okhttp3.HttpUrl$Companion.get(HttpUrl.kt:1634)
    at okhttp3.HttpUrl$Companion.parse(HttpUrl.kt:1643)
    at okhttp3.HttpUrl.parse(HttpUrl.kt)
    at com.influxdb.client.InfluxDBClientOptions$Builder$ParsedUrl.<init>(InfluxDBClientOptions.java:689)
    at com.influxdb.client.InfluxDBClientOptions$Builder$ParsedUrl.<init>(InfluxDBClientOptions.java:681)
    at com.influxdb.client.InfluxDBClientOptions$Builder.connectionString(InfluxDBClientOptions.java:504)
    at com.influxdb.client.InfluxDBClientOptions$Builder.url(InfluxDBClientOptions.java:288)
    at com.influxdb.client.reactive.InfluxDBClientReactiveFactory.create(InfluxDBClientReactiveFactory.java:105)


How do I map a Comparator

I have a comparator of type Comparator<Integer> and a function Function<Pair<Integer,?>,Integer> expressed as Pair::left (that returns an Integer).

I need to obtain a comparator of type Comparator<Pair<Integer,?>>.

If I wanted to simply map a function Function<T,U> to a resulting function Function<T,V> through a function Function<U,V>, I could simply apply the andThen() method like this:

Function<Integer, String> toBinary = Integer::toBinaryString;
Function<Pair<Integer, ?>, Integer> left = Pair::left;

var pairToBinary = left.andThen(toBinary); // has type Function<Pair<Integer, ?>, String>

Is it possible to obtain Comparator<Pair<Integer,?>> in a similar way?
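For comparison, a sketch with the standard JDK Comparator.comparing overload that takes both a key extractor and a key comparator (the explicit type witness is my assumption about what inference needs with the wildcard):

Comparator<Integer> cmp = Comparator.naturalOrder();

// comparing(keyExtractor, keyComparator) composes the extractor with an
// existing comparator: the Comparator analogue of andThen()
Comparator<Pair<Integer, ?>> pairCmp =
        Comparator.<Pair<Integer, ?>, Integer>comparing(Pair::left, cmp);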



UICollectionView: make first item's width different from the rest

I'm currently trying to achieve the following layout using NSCollectionLayoutSection. Do you have any advice on making only the first item 50px wide while keeping the rest of the items at 100px (there could be any number of items)? The solution has to be an NSCollectionLayoutSection.

[image: the desired layout, with a narrower first item]

I'm currently displaying them all at the same width using the following, which is not the desired result:

    let item = NSCollectionLayoutItem(layoutSize: .init(widthDimension: .fractionalWidth(1.0), heightDimension: .fractionalHeight(1.0)))
    item.contentInsets = NSDirectionalEdgeInsets(top: 0,
                                                 leading: 0,
                                                 bottom: 0,
                                                 trailing: 8)
    
    let group = NSCollectionLayoutGroup.horizontal(layoutSize: NSCollectionLayoutSize(widthDimension: .absolute(100),
                                             heightDimension: .absolute(100)), subitems: [item])
    
    let section = NSCollectionLayoutSection(group: group)
    section.contentInsets = NSDirectionalEdgeInsets(top: 16,
                                                   leading: 16,
                                                   bottom: 16,
                                                   trailing: 16)
    section.orthogonalScrollingBehavior = .continuous

[image: the current layout, all items equal width]

I've also tried using absolute widths but didn't have much luck with that approach.

Thank you!
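One direction that might work (a sketch, untested; itemCount is a hypothetical value from your data source, and the 8pt spacing and 100pt height mirror the code above): NSCollectionLayoutGroup.custom computes each item's frame, so the first can be 50pt and the rest 100pt.

let group = NSCollectionLayoutGroup.custom(
    layoutSize: .init(widthDimension: .estimated(1000), heightDimension: .absolute(100))
) { _ in
    var items: [NSCollectionLayoutGroupCustomItem] = []
    var x: CGFloat = 0
    for index in 0..<itemCount {                   // hypothetical item count
        let width: CGFloat = index == 0 ? 50 : 100 // only the first item is narrower
        items.append(NSCollectionLayoutGroupCustomItem(
            frame: CGRect(x: x, y: 0, width: width, height: 100)))
        x += width + 8                             // trailing spacing
    }
    return items
}

The resulting group would then feed NSCollectionLayoutSection(group:) as in the code above.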



How to fix the number of decimals to 500 digits output in Python?

In the following example:

import math
x = math.log(2)
print("{:.500f}".format(x))    

I tried to get 500 digits of output, but I get only about 53 meaningful digits of ln(2), as follows:

0.69314718055994528622676398299518041312694549560546875000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

How can I fix this problem?
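A sketch with the third-party mpmath library (the assumption being that arbitrary-precision output, rather than formatting a 64-bit float, is the actual goal; a Python float carries only 53 bits of mantissa, so no format string can recover 500 true digits from it):

from mpmath import mp

mp.dps = 500        # 500 decimal places of working precision
print(mp.log(2))    # ln(2) computed at that precision, not a padded double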



Parse p nodes text including sibling nodes until the next p node

Weird title, I know. I am trying to parse an XML document which is kind of structured in paragraphs. However, sometimes there are additional nodes which should be inside a paragraph but simply aren't.

What I need is to find each paragraph, but also select everything after it until the next paragraph, up to a "termination" node, which here is the title node.

Here's an example:

<p typ="ct">(1) This is rule one</p>
<ol>
  <li>With some text</li>
  <li>that I want to parse</li>
</ol>
<p typ="ct">(2) And here is rule two</p>
<p typ="ct">(3) and rule three</p>
<title>Another section</title>

My desired output would be something like:

[
  "(1) This is rule one\nWith some text\nthat I want to parse", 
  "(2) And here is rule two", 
  "(3) and rule three"
]

I know I can select each paragraph using something like soup.select("p[typ=ct]") or soup.find_all("p", attrs=dict(typ="ct")), but it's those parts in between which I am not sure how to parse in a soupy way.
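A sketch of the sibling walk (assumptions: xml_text is a hypothetical variable holding the markup above, and collection stops at the next p or at the title terminator):

from bs4 import BeautifulSoup

soup = BeautifulSoup(xml_text, "xml")
results = []
for p in soup.find_all("p", attrs={"typ": "ct"}):
    parts = [p.get_text()]
    for sib in p.find_next_siblings():
        if sib.name in ("p", "title"):      # next paragraph or terminator: stop
            break
        parts.extend(sib.stripped_strings)  # e.g. each <li>'s text on its own line
    results.append("\n".join(parts))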



2022-08-22

ModuleNotFoundError in PIP package install in Conda Environment

I am trying to install a package in a new conda environment using the pip command. It installs, but with errors, and I get ModuleNotFoundError in the IDE.

The steps:

conda create --name facebookscraper python=3.8 all goes well

conda activate facebookscraper all goes well

conda install pip all goes well

pip install facebook-scraper installs, but at the end of the installation I get multiple warnings of the form WARNING: Target directory /opt/homebrew/lib/python3.9/site-packages/XYZPackageName already exists:

(facebookscraper) macbook@macbook ~ % pip install facebook-scraper
Collecting facebook-scraper
Using cached facebook_scraper-0.2.58-py3-none-any.whl (44 kB)
Collecting demjson3<4.0.0,>=3.0.5
Using cached demjson3-3.0.5-py3-none-any.whl
Collecting dateparser<2.0.0,>=1.0.0
Using cached dateparser-1.1.1-py2.py3-none-any.whl (288 kB)
Collecting requests-html<0.11.0,>=0.10.0
WARNING: Target directory /opt/homebrew/lib/python3.9/site-packages/tzlocal already exists. Specify --upgrade to force replacement.
WARNING: Target directory /opt/homebrew/lib/python3.9/site-packages/dateutil already exists. Specify --upgrade to force replacement.
WARNING: Target directory /opt/homebrew/lib/python3.9/site-packages/requests-2.28.1.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /opt/homebrew/lib/python3.9/site-packages/cssselect already exists. Specify --upgrade to force replacement.
WARNING: Target directory /opt/homebrew/lib/python3.9/site-packages/bin already exists. Specify --upgrade to force replacement.

When I run the following code in Visual Studio Code (using the facebookscraper conda environment as the interpreter):

from facebook_scraper import get_posts
for post in get_posts('nintendo', pages=1):
    print(post['text'][:50])

I get the following ModuleNotFoundError:

[Running] python -u "/Users/macbook/Coding/python-facebook-scraper/main.py"
Traceback (most recent call last):
  File "/Users/macbook/Coding/python-facebook-scraper/main.py", line 1, in <module>
    from facebook_scraper import get_posts
ModuleNotFoundError: No module named 'facebook_scraper'

[Done] exited with code=1 in 0.11 seconds

I tried to force replacement with pip install facebook-scraper --upgrade, but I get the same ModuleNotFoundError.

What am I doing wrong?

P.S.: The reason I am using pip install facebook-scraper and not conda install facebook-scraper is that the package facebook-scraper is not found in the conda channels.
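A short diagnostic sketch (an assumption suggested by the warnings: the shell is resolving Homebrew's python3.9 pip rather than the environment's, so the package lands outside the python3.8 env that VS Code is using):

conda activate facebookscraper
which python pip                        # both should resolve inside .../envs/facebookscraper/
python -m pip install facebook-scraper  # ties the install to the env's own interpreter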



FluentValidation ILanguageManager.GetString() not invoked for custom Rules

I have a custom rule like this:

public static IRuleBuilderOptionsConditions<T, string?> MustBeCool<T>(this IRuleBuilder<T, string?> ruleBuilder)
{
    return ruleBuilder.Custom((input, context) =>
    {
        if(/* input is not cool */)
        {
            context.AddFailure("Not cool.");
        }
    });
}

I also have a custom implementation of ILanguageManager which pulls translations for validation messages from a database. My custom LanguageManager works fine for built-in rules. My problem is that ILanguageManager.GetString(...) is not getting called for my custom rule. I guessed that this might be because a validation error message is already provided, so I tried to add the failure like this:

context.AddFailure(new ValidationFailure
{
    PropertyName = context.PropertyName,
    ErrorCode = "MustBeCoolValidator"
    // no error message provided
});

That doesn't work either; an empty validation error message is returned. In my case the validation rules and the error message translations don't live in the same place, so I can't really provide the translated validation error message where the rule is declared.

Is there a way to invoke the ILanguageManager.GetString() for my custom rule?
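A sketch of an alternative shape for the rule (my assumption: message-template resolution through the language manager is keyed to a named validator, which Custom() callbacks with inline messages bypass). Deriving from PropertyValidator and resolving the template in GetDefaultMessageTemplate:

// assumes the FluentValidation 10+ generic PropertyValidator base class
public class MustBeCoolValidator<T> : PropertyValidator<T, string?>
{
    public override string Name => "MustBeCoolValidator";

    public override bool IsValid(ValidationContext<T> context, string? value)
        => true /* replace with the real "is cool" check */;

    protected override string GetDefaultMessageTemplate(string errorCode)
        => ValidatorOptions.Global.LanguageManager.GetString(errorCode);
}

// usage: ruleBuilder.SetValidator(new MustBeCoolValidator<T>());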



DataContractSerializer fails for List

I have changed my serialization to DataContracts, but now I am having a problem with a specific class. It works fine on my Mac, but not on my Android devices when built using IL2CPP. The thread stops at the WriteObject call. My three classes related to the error:

[DataContract]
[KnownType(typeof(TaskIdentifier))]
[KnownType(typeof(TraceableTaskItem))]
[KnownType(typeof(List<TraceableTaskItem>))]
public class TraceableTaskContainer
{
    [DataMember]
    protected TaskIdentifier _taskIdent;

    [DataMember]
    protected List<TraceableTaskItem> _lNotAccomplishedTaskItems = new List<TraceableTaskItem>();

//.....
}
[DataContract]
[KnownType(typeof(DateTime))]
[KnownType(typeof(ItemReviewStage))]
public class TraceableTaskItem : GenericTaskItem, IEquatable<TraceableTaskItem>, IComparable<TraceableTaskItem>
{
    [DataMember]
    public string sDisplayTextInTraceableTaskReport;

    [DataMember]
    protected DateTime NextReviewDate;

    [DataMember] //ItemReviewStage is a enum
    protected ItemReviewStage reviewStage = ItemReviewStage.NewTask;

   
    public TraceableTaskItem() //important to deserialize old classes, do not remove it
    {

    }
//....
}
[DataContract]
//[KnownType(typeof(List<bool>))]
abstract public class GenericTaskItem
{
    [DataMember]
    public string sItemID = "";

    //[DataMember]
    protected List<bool> lTimesAnsweredCorrectly = new List<bool>();

    protected List<List<string>> llWrongAnswers = new List<List<string>>();

//...
}

The code works with the commented lines above. But as soon as I uncomment DataMember on lTimesAnsweredCorrectly, with or without uncommenting the equivalent KnownType line (I have tested both), the code stops working on my mobile devices. Any idea how I can fix this?

Exception:

"System.Reflection.TargetInvocationException: 
Exception has been thrown by the target of an invocation. 
---> System.ExecutionEngineException: Attempting to call method \'System.Runtime.Serialization.XmlObjectSerializerWriteContext::
IncrementCollectionCountGeneric<System.Boolean>\' 
for which no ahead of time (AOT) code was generated.\n  at 
System.Reflection.MonoMethod.Invoke (System.Object obj, 
System.Reflection.BindingFlags invokeAttr, 
System.Reflection.Binder binder, System.Object[] parameters, 
System.Globalization.CultureInfo culture) [0x00000] 
in <00000000000000000000000000000000>:0 \n  at 
System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] 
in <00000000000000000000000000000000>:0 \n  at System.Runtime.Serialization.XmlFormatWriterInterpreter.WriteCollection (System.Runtime.Serialization.CollectionDataContract collectionContract) [0x00000] 
in <00000000000000000000000000000000>:0 \n  at 
System.Runtime.Serialization.XmlFormatWriterInt… string



 StackTrace: "  at System.Reflection.MonoMethod.Invoke (System.Object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <00000000000000000000000000000000>:0 \n  at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in <00000000000000000000000000000000>:0 \n  at System.Runtime.Serialization.XmlFormatWriterInterpreter.WriteCollection (System.Runtime.Serialization.CollectionDataContract collectionContract) [0x00000] in <00000000000000000000000000000000>:0 \n  at System.Runtime.Serialization.XmlFormatWriterInterpreter.WriteCollectionToXml (System.Runtime.Serialization.XmlWriterDelegator xmlWriter, System.Object obj, System.Runtime.Serialization.XmlObjectSerializerWriteContext context, System.Runtime.Serialization.CollectionDataContract collectionContract) [0x00000] in <00000000000000000000000000000000>:0 \n  at System.Runtime.Serialization.XmlForma… string

Source: "mscorlib" string

inner exception: 
 InnerException "System.ExecutionEngineException: Attempting to call method \'System.Runtime.Serialization.XmlObjectSerializerWriteContext::
IncrementCollectionCountGeneric<System.Boolean>\' for which no ahead of time (AOT) code was generated.\n  at System.Reflection.MonoMethod.Invoke (System.Object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <00000000000000000000000000000000>:0 \n  at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in <00000000000000000000000000000000>:0 \n  at System.Runtime.Serialization.XmlFormatWriterInterpreter.WriteCollection (System.Runtime.Serialization.CollectionDataContract collectionContract) [0x00000] in <00000000000000000000000000000000>:0 \n  at System.Runtime.Serialization.XmlFormatWriterInterpreter.WriteCollectionToXml (System.Runtime.Serialization.XmlWriterDelegator xmlWriter, System.Object obj, System.Ru… System.Exception

Update

The problem seems to be with bool and int only; a List of string works as expected.
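A sketch of the usual IL2CPP/AOT workaround (an assumption based on the ExecutionEngineException above: the generic code for the bool collection path was never ahead-of-time compiled). Referencing the instantiation in code that is compiled but never executed forces generation:

using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;
using System.Xml;

static class AotStubs
{
    // never called: its presence alone makes IL2CPP emit AOT code for the
    // List<bool> collection-serialization generics reached via reflection
    static void ForceAotGeneration()
    {
        var serializer = new DataContractSerializer(typeof(List<bool>));
        serializer.WriteObject(XmlWriter.Create(TextWriter.Null), new List<bool> { true });
        throw new System.InvalidOperationException("AOT stub; never invoke.");
    }
}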



Pandas Similar rows Search

How would I filter data on multiple criteria across the spreadsheet using Python (pandas)?

I am trying to filter transactions where Curr1 = USD, the Trade Times are within 1 minute of each other, the Notional 1 is the same, and the Prices are within a 0.5% spread between transactions. Then the row with the furthest (highest) Maturity would be moved to a different sheet in Excel.

Example of the data: GoogleDrive Excel File

Thank you in advance!
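A rough sketch of the grouping logic (heavily assumption-laden: column names are taken from the question, trades.xlsx is a hypothetical file name, Trade Time is assumed to be a datetime column, and "within 1 minute" is read as consecutive USD trades no more than a minute apart):

import pandas as pd

df = pd.read_excel("trades.xlsx")                     # hypothetical file name
usd = df[df["Curr1"] == "USD"].sort_values("Trade Time")

# start a new bucket whenever the gap to the previous USD trade exceeds one minute
usd["bucket"] = (usd["Trade Time"].diff() > pd.Timedelta(minutes=1)).cumsum()

# keep groups sharing Notional 1 whose prices sit within a 0.5% spread
matches = usd.groupby(["bucket", "Notional 1"]).filter(
    lambda g: len(g) > 1 and g["Price"].max() <= g["Price"].min() * 1.005
)

# within each group, the row with the highest Maturity goes to another sheet
furthest = matches.loc[matches.groupby(["bucket", "Notional 1"])["Maturity"].idxmax()]
furthest.to_excel("output.xlsx", sheet_name="Furthest", index=False)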



Improve gremlin traversal query performance

I would like to start from a particular source node (id = '01F546'), traverse in both directions for x hops (4 in the example below), and list the properties of the first 200 destination nodes meeting certain criteria ('type' = 'output' in the sample below). I have set up the timeLimit to make sure the query returns before timing out.

I have created composite/mixed indexes on 'id' and 'type'.

For a graph of 250k nodes and 400k edges, the above query takes about 7 seconds via the Gremlin query console. What can be done to speed up the performance?

Thank you

The Gremlin query and profile() results are below.

g.V().
  has('id', eq('01F546')).emit().
  repeat(bothE().otherV().timeLimit(300000)).times(4).
  has('type', eq('output')).
  map(properties().group().by(key()).by(value())).
  dedup().
  limit(200).
  toList()

The output of the profile is:

HasStep([type.eq(output)])                                            17          17         627.249    90.79
TraversalMapStep([JanusGraphMultiQueryStep, Jan...                    17          17           3.129     0.45
  JanusGraphMultiQueryStep                                            17          17           0.207
  JanusGraphPropertiesStep(property)                                 180         180           1.520
    \_condition=(PROPERTY AND visibility:normal)
    \_orders=[]
    \_isFitted=true
    \_isOrdered=true
    \_query=SliceQuery[0x40,0x60)
    \_multi=true
    \_vertices=1
    optimization                                                                               0.001
    [16 further identical "optimization" lines omitted]
  GroupStep(key,[PropertyValueStep])                                  17          17           1.204
    PropertyValueStep                                                180         180           0.279
DedupGlobalStep(null,null)                                            11          11           0.202     0.03
RangeGlobalStep(0,20)                                                 11          11           0.151     0.02
                                            >TOTAL                     -           -         690.901        -
    optimization                                                                               0.001
    [32 further "optimization" lines, 0.000-0.001 each, omitted]
  GroupStep(key,[PropertyValueStep])                                 109         109           6.813
    PropertyValueStep                                               1083        1083           1.854
DedupGlobalStep(null,null)                                            76          76           0.610     0.01
                                            >TOTAL                     -           -        6721.808        -
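For reference, a reshaped traversal I would try (a sketch, not a verified fix; the assumptions are that revisited vertices can be pruned with dedup() inside repeat(), that filtering and limiting before property materialization saves work, and that valueMap() can stand in for the manual properties().group() projection):

g.V().has('id', '01F546').
  emit().
  repeat(both().dedup()).times(4).   // prune revisited vertices during expansion
  has('type', 'output').
  dedup().
  limit(200).                        // cap results before touching properties
  valueMap().
  toList()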