2023-04-30

Search and print result along with earlier information

I have a total of 30 test result files, each containing 12 iterations. The structure of a file is as below:

File1_loc/result.txt

# starting information
# User information
# Time stamps
# Random information
# Thousands of lines in between
# ----------------- Iteration 1 ----------------------
# $Test show addr 0x2341233  data 0x241341
# $Test matches Pass
# $Test show addr 0x123324  data 0x223245
# $Test matches Pass
# Few hundreds line
# $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE
# ******:time ns: CHANGE ERROR COUNT TO: 1
# $Test show addr 0x2341233  data 0x241341
# $Test matches Pass
# $Test show addr 0x123324  data 0x223245
# $Test matches Pass
# Few hundreds line
# ----------------------------------------------------
# ----------------- Iteration 2 ----------------------
# $Test show addr 0x2341233  data 0x241341
# $Test matches Pass
# $Test show addr 0x123324  data 0x223245
# $Test matches Pass
# Few hundreds line
# $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE
# ******:time ns: CHANGE ERROR COUNT TO: 2
# $Test show addr 0x2341233  data 0x241341
# $Test matches Pass
# $Test show addr 0x123324  data 0x223245
# $Test matches Pass
# Few hundreds line
# $Test time: ERROR: Results_dont_Match Loc: Actual=EF12321 Expected=DL298234
# ******:time ns: CHANGE ERROR COUNT TO: 3
# ----------------------------------------------------
This pattern continues
# ----------------- Iteration 12 ----------------------
# $Test show addr 0x2341233  data 0x241341
# $Test matches Pass
# $Test show addr 0x123324  data 0x223245
# $Test matches Pass
# Few hundreds line
# $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE
# ******:time ns: CHANGE ERROR COUNT TO: 4
# $Test show addr 0x2341233  data 0x241341
# $Test matches Pass
# $Test show addr 0x123324  data 0x223245
# $Test matches Pass
# Few hundreds line
# ----------------------------------------------------

I have a total of 30 files like this. Each file contains results, plus ERROR lines where there is a mismatch. I would like to print the following information for each result.txt file.

    File1_Summary:
    # ----------------- Iteration 1 ----------------------
    # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE
    # ******:time ns: CHANGE ERROR COUNT TO: 1
    # ----------------- Iteration 2 ----------------------
    # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE
    # ******:time ns: CHANGE ERROR COUNT TO: 2
    # $Test time: ERROR: Results_dont_Match Loc: Actual=EF12321 Expected=DL298234
    # ******:time ns: CHANGE ERROR COUNT TO: 3
    # ----------------- Iteration 12 ----------------------
    # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE
    # ******:time ns: CHANGE ERROR COUNT TO: 4
    File2_Summary:
    # ----------------- Iteration 1 ----------------------
    # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE
    # ******:time ns: CHANGE ERROR COUNT TO: 1
    # ----------------- Iteration 12 ----------------------
    # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE
    # ******:time ns: CHANGE ERROR COUNT TO: 2

I have used awk to search for the 4th field matching ERROR, which prints out those lines. However, I would also like to print the Iteration # information.

awk '$4 ~/ERROR/' File1_loc/result.txt
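
One way to also print the Iteration # is to remember the most recent iteration banner and emit it the first time an error shows up beneath it. This is a sketch of that idea, untested against the real files; the patterns assume the banner and error lines look exactly like the samples above:

awk '
    /-+ Iteration [0-9]+ -+/ { header = $0; printed = 0 }    # remember the current banner
    /ERROR/ {
        if (!printed) { print header; printed = 1 }          # print the banner once per iteration
        print                                                # print the matching line itself
    }
' File1_loc/result.txt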


VS Code: Connecting to a python interpreter in docker container without using remote containers

I know it is generally possible to connect to a container's python interpreter with:

  • remote containers
  • remote ssh

The problems I have with these solutions:

  • it opens a new window where I need to install/specify all extensions again
it opens a new window per container. I am working in a monorepo where each service's folder is mounted in a different container (connected via docker compose)

Is there a solution that allows me to specify a remote container to connect to simply for the python interpreter (and not for an entirely new workspace)?



Stop/kill Bigquery in console

We have a use case regarding BigQuery.

We have a dashboard to identify long-running queries, or queries which scan a lot of data and need to be optimized. Suppose a user is writing a query in the BigQuery Cloud Console and it shows that the query will scan 300 GB of data; is there any method to stop that user, in real time, before the query runs?

I tried with a Cloud Function and Pub/Sub, but the problem is that I cannot figure out which BigQuery event would trigger that function.
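
As far as I know, the closest server-side guard is maximum bytes billed: BigQuery refuses to run a query that would scan more than the configured amount, and a dry run reports the scan size without executing anything. A minimal sketch with the Python client (project, dataset, and query are placeholders):

from google.cloud import bigquery

client = bigquery.Client()

# Dry run: estimate bytes scanned without executing or billing anything.
dry_cfg = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query("SELECT * FROM `project.dataset.table`", job_config=dry_cfg)
print("Would scan", job.total_bytes_processed, "bytes")

# Hard cap: this job fails upfront if it would scan more than 100 GB.
capped_cfg = bigquery.QueryJobConfig(maximum_bytes_billed=100 * 1024**3)
client.query("SELECT * FROM `project.dataset.table`", job_config=capped_cfg).result()

This does not intercept queries typed straight into the Cloud Console, but a similar cap can be applied there through project- or user-level custom query quotas.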



Installing specific version of NodeJS and NPM on Alpine docker image

I need to use a standard Alpine docker image and install a specific version of Node and NPM. Here is my attempt so far:

FROM alpine:3.17.2

RUN apk update
RUN apk upgrade
RUN apk add bash git helm openssh yq github-cli

RUN apk add \
    curl \
    docker \
    openrc

# nvm environment variables
ENV NVM_DIR /usr/local/nvm
ENV NVM_VERSION 0.39.3
ENV NODE_VERSION 18.16.0

# install nvm
# https://github.com/creationix/nvm#install-script
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v$NVM_VERSION/install.sh | bash

# install node and npm
RUN source $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default

# add node and npm to path so the commands are available
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH

RUN ls -asl $NVM_DIR/versions/node/v$NODE_VERSION/bin
RUN ls -asl $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules/npm/bin

RUN $NVM_DIR/versions/node/v$NODE_VERSION/bin/node -v

RUN $NVM_DIR/versions/node/v$NODE_VERSION/bin/npm install --global yarn

# Start docker on boot
RUN rc-update add docker boot

# Default commands to bash
ENTRYPOINT ["bash"]

I am getting this:

#7 [ 4/10] RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.39.3/install.sh | bash
#7 sha256:76a5a08c3c01075cd22585bc1f3df8f47fe258b116742db843cea6fa553a09c6
#7 0.181   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
#7 0.182                                  Dload  Upload   Total   Spent    Left  Speed
100 15916  100 15916    0     0  57463      0 --:--:-- --:--:-- --:--:-- 60287
#7 0.478 => Downloading nvm from git to '/usr/local/nvm'
=> Cloning into '/usr/local/nvm'...
#7 3.239 * (HEAD detached at FETCH_HEAD)
#7 3.240   master
#7 3.268 => Compressing and cleaning up git repository
#7 3.307
#7 3.338 => Profile not found. Tried ~/.bashrc, ~/.bash_profile, ~/.zprofile, ~/.zshrc, and ~/.profile.
#7 3.338 => Create one of them and run this script again
#7 3.338    OR
#7 3.338 => Append the following lines to the correct file yourself:
#7 3.338
#7 3.338 export NVM_DIR="/usr/local/nvm"
#7 3.338 [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
#7 3.338
#7 3.440 => Installing Node.js version 18.16.0
#7 4.948 Downloading and installing node v18.16.0...
#7 5.646 Downloading https://nodejs.org/dist/v18.16.0/node-v18.16.0-linux-x64.tar.gz...
######################################################################## 100.0%
#7 8.333 Computing checksum with sha256sum
#7 8.832 Checksums matched!
#7 11.93 Now using node v18.16.0 (npm v)
#7 12.41 Creating default alias: default -> 18.16.0 (-> v18.16.0 *)
#7 12.63 Failed to install Node.js 18.16.0
#7 12.63 => Close and reopen your terminal to start using nvm or run the following to use it now:
#7 12.63
#7 12.63 export NVM_DIR="/usr/local/nvm"
#7 12.63 [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
#7 DONE 12.7s

I am not sure about the "Failed to install Node.js 18.16.0" message; as you will see in my tests with "ls", it seems to be installed?

First I "ls" the bin directory where I expect node to be installed:

RUN ls -asl /usr/local/nvm/versions/node/v18.16.0/bin
#9 sha256:42ba843ab812861bf82e0e493e211095cc749408940b5ec21ce94fabe3997538
#9 0.138 total 88820
#9 0.141      4 drwxr-xr-x    2 1000     1000          4096 Apr 27 07:35 .
#9 0.141      4 drwxr-xr-x    6 root     root          4096 Apr 27 07:35 ..
#9 0.141      0 lrwxrwxrwx    1 root     root            45 Apr 27 07:35 corepack -> ../lib/node_modules/corepack/dist/corepack.js
#9 0.141  88812 -rwxr-xr-x    1 1000     1000      90940576 Apr 12 05:31 node
#9 0.141      0 lrwxrwxrwx    1 root     root            38 Apr 27 07:35 npm -> ../lib/node_modules/npm/bin/npm-cli.js
#9 0.141      0 lrwxrwxrwx    1 root     root            38 Apr 27 07:35 npx -> ../lib/node_modules/npm/bin/npx-cli.js
#9 DONE 0.1s

This seems OK to me, does it not?

My other "ls" gives this:

RUN ls -asl /usr/local/nvm/versions/node/v18.16.0/lib/node_modules/npm/bin
#10 sha256:c2872332dbb58f191400fa23211a691db9a5f5dc07425bc9d3c83bf1cafb31f8
#10 0.141 total 36
#10 0.144      4 drwxr-xr-x    3 1000     1000          4096 Apr 27 07:35 .
#10 0.144      4 drwxr-xr-x    7 1000     1000          4096 Apr 27 07:35 ..
#10 0.144      4 drwxr-xr-x    2 1000     1000          4096 Apr 27 07:35 node-gyp-bin
#10 0.144      4 -rwxr-xr-x    1 1000     1000          1365 Oct 11  2022 npm
#10 0.144      4 -rwxr-xr-x    1 1000     1000            54 Oct 11  2022 npm-cli.js
#10 0.144      4 -rwxr-xr-x    1 1000     1000           483 Oct 11  2022 npm.cmd
#10 0.144      4 -rwxr-xr-x    1 1000     1000          1567 Oct 11  2022 npx
#10 0.144      4 -rwxr-xr-x    1 1000     1000          2922 Dec  7 06:00 npx-cli.js
#10 0.144      4 -rwxr-xr-x    1 1000     1000           539 Oct 11  2022 npx.cmd
#10 DONE 0.2s

So npm is here also, looking good?

In my initial post here I gave the wrong error message, I am sorry. That error message was from my local machine, which is a Mac with an M2 CPU, and I believe the CPU caused it, but that is for another day. I need this to run on the build servers, where we are running Linux on amd64, and there I get a different error, the following:

Step 14/17 : RUN $NVM_DIR/versions/node/v$NODE_VERSION/bin/node -v
 ---> Running in d63ec3b287d9
/bin/sh: /usr/local/nvm/versions/node/v18.16.0/bin/node: not found
The command '/bin/sh -c $NVM_DIR/versions/node/v$NODE_VERSION/bin/node -v' returned a non-zero code: 127

Error: Process completed with exit code 127.

So my "ls" say node is there but when I try and run it, it is not?

Best regards
Søren



Using @Context for a MapStruct mapper is treated as an additional parameter

I have a mapper where I'm trying to use a repository as context so I can fetch my object during the mapping. I spent a lot of time researching how @Context works, and this is what I came up with:

@Mapper(componentModel = MappingConstants.ComponentModel.SPRING, nullValueMappingStrategy = NullValueMappingStrategy.RETURN_DEFAULT)
public interface EnumValueMapper {
    @Mapping(target = "enumDefinition", source = "enumDefinitionId", qualifiedByName = "getEnumDefinition")
    EnumValue toEntity(EnumValueBean enumValueBean);

    @Mapping(target = "enumDefinitionId", source = "enumDefinition.name")
    EnumValueBean toBean(EnumValue enumValue);

    @Named("getEnumDefinition")
    static EnumDefinition getEnumDefinition(String enumDefinitionName, @Context IEnumDefinitiontDao enumDefinitionDao) {
        return enumDefinitionDao.findById(enumDefinitionName).orElse(null);
    }
}

The issue with this is that I get the error

Qualifier error. No method found annotated with @Named#value: [ getEnumDefinition ]. See https://mapstruct.org/faq/#qualifier for more info.

due to the compiler seeing @Context as a 2nd parameter and not finding a method named "getEnumDefinition" that matches with a single String parameter. If I remove the @Context parameter the error disappears and the build succeeds. I'm confused about what I'm doing wrong, because I didn't see anyone explicitly passing a context inside a @Mapping; they just give their source and the name for qualifiedByName, and it finds the correct method despite the @Context.
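
For comparison, the examples I have seen that combine @Context with a qualified method declare the context on the mapping method itself, so MapStruct has something to propagate to the @Named method. A sketch of that shape (same types as above, untested):

@Mapper(componentModel = MappingConstants.ComponentModel.SPRING, nullValueMappingStrategy = NullValueMappingStrategy.RETURN_DEFAULT)
public interface EnumValueMapper {
    // The @Context parameter is declared on the mapping method; MapStruct
    // passes it through to invoked methods that declare a matching @Context.
    @Mapping(target = "enumDefinition", source = "enumDefinitionId", qualifiedByName = "getEnumDefinition")
    EnumValue toEntity(EnumValueBean enumValueBean, @Context IEnumDefinitiontDao enumDefinitionDao);

    @Named("getEnumDefinition")
    static EnumDefinition getEnumDefinition(String enumDefinitionName, @Context IEnumDefinitiontDao enumDefinitionDao) {
        return enumDefinitionDao.findById(enumDefinitionName).orElse(null);
    }
}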



2023-04-29

Model is nil, even though a function creates it

My function clones a random pre-set model and returns it; however, it returns the model as nil. I want to know how and why the model is set to nil even though the function returns the object value. The function in question is below:

function choosehallway(j,i)
    local num = math.random(1, #hallwaychildren)
    local qawrestdryhouijmp = false
    local g
    local function f()
        local hallway = hallwaychildren[num]:Clone()
        hallway.Parent = Tiles
        hallway.Name = #Tiles:GetChildren()
        hallway.PrimaryPart = hallway.Floor
        Grid[#Grid + 1] = hallway
        hallway:PivotTo(CFrame.new(TileSize.X*j,Origin.Y,TileSize.Z*i))
        qawrestdryhouijmp = true
        return hallway
    end
    
    --btw, grid[#grid] is the previous grid space, not the current one.
    --grid[#grid + 1] is the current space, grid[#grid + 1 + gridX] is the one below the current 
    --one, grid[#grid + 1 - gridX] is the space above the current space, and grid[#grid + 2]
    --is the space after the current space.
    --by current space I mean the space we are choosing rn.
    
    if Grid[#Grid] then --if not first, then...
        if Grid[#Grid]:FindFirstChild("E") == nil and hallwaychildren[num]:FindFirstChild("W") == nil then
            if qawrestdryhouijmp == false then
                g = f()
            end
        elseif Grid[#Grid + 1 - GridX] then
            if Grid[#Grid + 1 - GridX]:FindFirstChild("N") == nil and hallwaychildren[num]:FindFirstChild("S") == nil then
                if qawrestdryhouijmp == false then
                    g = f()
                end
            end
        elseif Grid[#Grid + 1 + GridX] then
            if Grid[#Grid + 1 + GridX]:FindFirstChild("S") == nil and hallwaychildren[num]:FindFirstChild("N") == nil then
                if qawrestdryhouijmp == false then
                    g = f()
                end
            end
        elseif Grid[#Grid + 2] then
            if Grid[#Grid + 2]:FindFirstChild("W") == nil and hallwaychildren[num]:FindFirstChild("E") == nil then
                if qawrestdryhouijmp == false then
                    g = f()
                end
            end
        else
            local hallway = Hallways.SNWE:Clone()
            hallway.Parent = Tiles
            hallway.Name = #Tiles:GetChildren()
            hallway.PrimaryPart = hallway.Floor
            Grid[#Grid + 1] = hallway
            hallway:PivotTo(CFrame.new(TileSize.X*j,Origin.Y,TileSize.Z*i))
            qawrestdryhouijmp = true
            g = hallway
        end
    else --if first then...
        g = f()
    end
    
    if g == nil then
        wait(.1)
        return choosehallway(j,i)
    else
        return g
    end
end

I tried changing the order and ensuring the function returns a model every time it is called (somehow it still returned nil??). I even tried forcing the function to be continuously called until it didn't return nil, which also didn't work for whatever reason. It just stops without an error every time the function returns nil, with the rest of the map not generating. There also seems to be a random grid space all the way out in the middle of nowhere, idk why.

image of generation example

Edit: fixed it. New code below:

function choosehallway(j,i)
    local num = math.random(1, #hallwaychildren)
    local qawrestdryhouijmp = false
    local g
    local function f()
        local hallway = hallwaychildren[num]:Clone()
        hallway.Parent = Tiles
        hallway.Name = #Tiles:GetChildren()
        hallway.PrimaryPart = hallway.Floor
        Grid[#Grid + 1] = hallway
        hallway:PivotTo(CFrame.new(TileSize.X*j,Origin.Y,TileSize.Z*i))
        qawrestdryhouijmp = true
        return hallway
    end
    
    local function s(thing, dir, dir2)
        local Space = Grid[#Grid + thing]
        if Space ~= nil then
            if Space:FindFirstChild(dir) == nil and hallwaychildren[num]:FindFirstChild(dir2) == nil then
                return true
            end
        end
    end
    
    --btw, grid[#grid] is the previous grid space, not the current one.
    --grid[#grid + 1] is the current space, grid[#grid + 1 + gridX] is the one below the current 
    --one, grid[#grid + 1 - gridX] is the space above the current space, and grid[#grid + 2]
    --is the space after the current space.
    --by current space I mean the space we are choosing rn.
    
    local hc = hallwaychildren[num]
    
    g = f()
    if s(0, "E","W") == true then
        return g
    else
        if s(GridUp, "S","N") == true then
            return g
        else
            if s(GridDown, "N","S") == true then
                return g
            else
                if s(GridRight, "W","E") == true then
                    return g
                else
                    wait(.1)
                    g.Parent = nil
                    g = choosehallway(j,i)
                    return g
                end
            end
        end
    end
end


Issue spawning player

I have a door system in my game that allows the player to switch scenes. Depending on the door he triggers, he is supposed to be placed in different spots on the map.

All my doors have a box collider and a sceneSwitcher script attached to them that looks like this:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;
using TMPro;

public class SceneSwitcher : MonoBehaviour
{
    float distance;
    public GameObject player;
    public GameObject interactUI;
    public Animator Transition;
    public string levelToLoadNext;
    public float newSpawPointx;
    public float newSpawPointy;
    public float newSpawPointz;

    void OnMouseOver()
    {
        distance = Vector3.Distance(player.transform.position, this.transform.position);
        if (distance <= 5.5f)
        {
            interactUI.SetActive(true);
            gameStatsManager.levelToLoad = levelToLoadNext;
            gameStatsManager.spawnPoint = new Vector3 (newSpawPointx, newSpawPointy, newSpawPointz);
            if (Input.GetMouseButtonDown(0))
            {
                StartCoroutine(LoadLevel());
            }
        }
        else
        {
            interactUI.SetActive(false);
        }
    }
    void OnMouseExit()
    {
        interactUI.SetActive(false);
    }

    void doorAnimationScene()
    {
        SceneManager.LoadScene("doorAnimationScene");
    }

    IEnumerator LoadLevel()
    {
        Transition.SetTrigger("Start");
        yield return new WaitForSeconds(1);
        doorAnimationScene();

    }
}

While the door animation is being played, this little script is called:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;

public class loadScene : MonoBehaviour
{

    public void loadCorrectLevel()
    {
        SceneManager.LoadScene(gameStatsManager.levelToLoad);
        Debug.Log(gameStatsManager.spawnPoint);
    }
}

That just tells the sceneManager what scene to load next.

The gameStatsManager script that is being called in the sceneSwitcher script is this:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class gameStatsManager : MonoBehaviour
{
    public static string levelToLoad;
    public static Vector3 spawnPoint = new Vector3(1381.94f, 60.86f, 859.23f);
}

It is not in the scenes, it is just a script that holds 2 static variables, levelToLoad which tells the game what scene to load next and spawnPoint which tells the game the vector3 coordinates where the player should be spawned.

When the player places the mouse over the door the levelToLoad and spawnPoint variables are updated automatically.

Each scene has a playerSpawnPointSet empty object with the following script attached to it:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class playerSpawnPointSet : MonoBehaviour
{
    public GameObject player;
    // Start is called before the first frame update
    void Start()
    {
        player = GameObject.FindWithTag("Player");
        player.transform.position = gameStatsManager.spawnPoint;
    }

    // Update is called once per frame
    void Update()
    {
        
    }
}

The script takes the gameStatsManager.spawnPoint data and assigns it to the player's position.

Now my issue is that for some reason, sometimes the player's position doesn't get updated and when the scene is loaded he stays in his original position. My Debug.Log in the loadScene script is always correct (it is always the vector3 coordinates where the player should be placed) but for some reason Unity doesn't always place the player there...

It seems to mostly happen when I first load the project. Any idea why my system is not always working like it should?
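
One hedged guess, given that the Debug.Log value is always right: if the player object carries a CharacterController (or a physics-driven Rigidbody), writing transform.position in Start() can be overridden in the same frame. A sketch of the usual workaround, assuming a CharacterController is present:

using UnityEngine;

public class playerSpawnPointSet : MonoBehaviour
{
    void Start()
    {
        GameObject player = GameObject.FindWithTag("Player");
        CharacterController cc = player.GetComponent<CharacterController>();
        if (cc != null) cc.enabled = false;   // keep it from overriding the teleport
        player.transform.position = gameStatsManager.spawnPoint;
        if (cc != null) cc.enabled = true;
    }
}

If there is no CharacterController, the other usual suspect is script execution order: some other Start() repositioning the player after this one runs.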



Split a CONVEX polygon into 2 parts vertically (python)

I have an array of points which represent the vertices of a CONVEX polygon. Example: [[1,0],[3,0],[4,2],[0,5],[2,8]]

What I am trying to do is split the polygon into 2 equal parts across its longest side (vertically), as the image below shows. It doesn't have to be exactly equal areas; anything close to 2 halves is fine. The image shows what kind of cut I am looking for: the red line could have cut horizontally along the y axis, but the polygon was more stretched out along the x axis, so it was split on the x axis.

Also, I am looking for JUST 2 parts, no more than that.

[IMAGE]

I tried to search online for some solutions but wasn't able to find a proper resource.
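
Lacking a reference, here is how I would sketch it in Python with shapely (an assumption that the library is acceptable): clip the polygon against a rectangle on either side of a cut perpendicular to the longer bounding-box axis, and bisect on the cut position until the two areas are nearly equal. Because the input is convex, each half is automatically a single convex piece.

from shapely.geometry import Polygon, box

pts = [[1, 0], [3, 0], [4, 2], [2, 8], [0, 5]]   # sample vertices, reordered into boundary order
poly = Polygon(pts)
minx, miny, maxx, maxy = poly.bounds
vertical = (maxx - minx) >= (maxy - miny)        # cut across the longer axis

def halves(cut):
    # Intersect with rectangles covering each side of the cut line.
    if vertical:
        return (poly.intersection(box(minx, miny, cut, maxy)),
                poly.intersection(box(cut, miny, maxx, maxy)))
    return (poly.intersection(box(minx, miny, maxx, cut)),
            poly.intersection(box(minx, cut, maxx, maxy)))

lo, hi = (minx, maxx) if vertical else (miny, maxy)
for _ in range(50):                              # bisection: balance the two areas
    mid = (lo + hi) / 2
    a, b = halves(mid)
    if a.area < b.area:
        lo = mid
    else:
        hi = mid

print(a.wkt)
print(b.wkt)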



Making a guessing game in C but outputs are different when I change variable types [duplicate]

First of all, I am new to C. I was trying to write a guessing game. It is simple: you just keep guessing until you find the number that the program holds in the "no_to_guess" variable. The problem is that if I change the variables to type char, the program prints the "Guess the number" prompt twice, but when I change them back to int it all works correctly. Do you guys have any idea?

int main(){
    int no_to_guess = 5,guess;
    
    while(guess != no_to_guess){
        printf("Guess a number: ");
        scanf("%d",&guess);
        if(guess == no_to_guess) printf("You Won");
     }
    return 0;
}

There is no problem with the int type, but when I change the variables to char the program prompts the user twice to guess. I know using char doesn't make much sense here, but I just wonder why this is happening.

char no_to_guess = '5', guess;

while (guess != no_to_guess) {
    printf("Guess a number: ");
    scanf("%c", &guess);
    if (guess == no_to_guess) printf("You Won");
}

For the char type it prompts: "Guess the number: Guess the number: ". For the int type it works as it is supposed to: "Guess the number: ".
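
The usual explanation for the double prompt: "%c" consumes the newline left in the input buffer by the previous read, whereas "%d" skips leading whitespace. A sketch of the char version with the standard fix (guess is also initialized, since the first while test otherwise reads an indeterminate value):

#include <stdio.h>

int main(void) {
    char no_to_guess = '5', guess = 0;

    while (guess != no_to_guess) {
        printf("Guess a number: ");
        scanf(" %c", &guess);   /* the leading space skips whitespace, including '\n' */
        if (guess == no_to_guess) printf("You Won");
    }
    return 0;
}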



How can I deploy a simple server container to ECS Fargate?

I've been trying to deploy my container all day for the last 8 days, but still can't (I know).

It is a simple Express.js API that listens for HTTP requests on port 80.

Before building the image and pushing it to Docker Hub, I created a health-check endpoint:

app.get("/health", (_, res: Response) => res.sendStatus(200));

const port = process.env.SERVER_PORT;

app.listen(port, async () => {
  console.log(`Server is listening on port ${port}. 🚀`);
});

I checked this path on my local machine and it seems to be working: Print screen

Then I tried deploying it to AWS ECS Fargate: created the cluster, created the task definition following the configuration I used in the docker-compose.yaml file, created a service based on that task definition, and launched the service. At the AWS console everything seems to be working fine too: Print screen

So I tried sending an HTTP request to the DNS name of the load balancer at /health, and also to the IP/port address at /health, but the requests all time out: Print screen
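
One thing worth ruling out on the container side (an assumption, since the task definition isn't shown): if SERVER_PORT is not set in the ECS task definition, app.listen(undefined) binds an arbitrary port rather than 80, in which case the target group health check and your requests would both time out. A defensive sketch:

const port = Number(process.env.SERVER_PORT ?? 80);   // fall back to 80 if the env var is absent

app.listen(port, async () => {
  console.log(`Server is listening on port ${port}. 🚀`);
});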



Setting `proxy-request-buffering off` leads to removing original `Content-Length` header

I noticed that the Nginx ingress buffers big requests, and I found that this behaviour can be switched off with nginx.ingress.kubernetes.io/proxy-request-buffering: "off", which in Nginx terms is equal to proxy_request_buffering off.

After that I started receiving no Content-Length header on my server side, although it is sent on the client side.

Is it expected behaviour? Can I receive the original Content-Length?



2023-04-28

Log files containing vehicle information in Veins 5.2.i1

I am very new to the Veins OMNeT++ environment. I ran the example simulation and then looked for the log files. I found the "\instant-veins-5.2-i1\Logs\VBox.log" file, but I did not see vehicle details like "Timestamp, Vehicle Node ID, Position, speed, etc.".
It had VM info like this instead:

00:00:02.927357 Log opened 2023-04-27T03:31:51.661539800Z
00:00:02.927358 Build Type: release
00:00:02.927360 OS Product: Windows 11
00:00:02.927361 OS Release: 10.0.22621
00:00:02.927361 OS Service Pack: 
00:00:03.018921 DMI Product Name: HP ENVY x360 Convertible 15m-ed1xxx
00:00:03.022306 DMI Product Version: Type1ProductConfigId
00:00:03.022323 Firmware type: UEFI
00:00:03.022761 Secure Boot: VERR_PRIVILEGE_NOT_HELD
00:00:03.022803 Host RAM: 12049MB (11.7GB) total, 2922MB (2.8GB) 

I found instructions in the manual: https://doc.omnetpp.org/omnetpp/manual/#sec:sim-lib:log-output https://sumo.dlr.de/docs/Simulation/Output/index.html https://sumo.dlr.de/docs/Simulation/Output/RawDump.html

Each explains which tags to use in the config files. Which specific files do these go in? What directory contains the actual log files?

Any help will be appreciated.

Update: I see that "Output of additional files (detectors, induction loops) are not being generated" had a similar problem.

The answer there is: "You will either turn this off (see the output of --help)". I am now looking for "--help".



Tag a player who touches script.Parent

So I want this to tag a player, though it just does nothing!

local playersTouching = {}

local function onPartTouched(part, player)
    if part == script.Parent and player and player:IsA("Player") then
        player:SetAttribute("Hiding", true)
        playersTouching[player] = true
    end
end

local function onPartTouchEnded(part, player)
    if part == script.Parent and player and player:IsA("Player") then
        player:SetAttribute("Hiding", nil)
        playersTouching[player] = nil
    end
end

script.Parent.Touched:Connect(function(otherPart)
    local humanoid = otherPart.Parent:FindFirstChildOfClass("Humanoid")
    if humanoid then
        onPartTouched(script.Parent, humanoid.Parent)
    end
end)

script.Parent.TouchEnded:Connect(function(otherPart)
    local humanoid = otherPart.Parent:FindFirstChildOfClass("Humanoid")
    if humanoid then
        onPartTouchEnded(script.Parent, humanoid.Parent)
    end
end)

So I executed it in Roblox, and the player touches script.Parent and doesn't get tagged. Also, the goal of this is to detect that a player is touching a certain part when another part with a script touches them. If my method doesn't work, please give me another method. Thx!
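
A hedged guess at why nothing happens: Touched hands the handler a BasePart whose parent is the character model, and a character model is not a Player, so player:IsA("Player") is always false and both functions return silently. Players:GetPlayerFromCharacter resolves the actual Player object; a sketch:

local Players = game:GetService("Players")

script.Parent.Touched:Connect(function(otherPart)
    local player = Players:GetPlayerFromCharacter(otherPart.Parent)
    if player then
        player:SetAttribute("Hiding", true)
    end
end)

script.Parent.TouchEnded:Connect(function(otherPart)
    local player = Players:GetPlayerFromCharacter(otherPart.Parent)
    if player then
        player:SetAttribute("Hiding", nil)
    end
end)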

AMENDMENT: Now my detecting script ain't working lol, so is it the same issue where I am getting the character instead of the player:

script.Parent.Touched:Connect(function(player)
    print("TOUCHED")
    if player:GetAttribute("Hiding") then
        print("tag player touch????")
    else
        print("HMM")

    end
end)


Sentry not logging errors when I am in debug mode in VSCode, but logging errors normally when I run the app with "flutter run"

I am having a weird bug with Sentry. Last week, Sentry was logging errors impeccably, but this week it started showing strange behavior. I always used debug mode in VS Code for the hot reload and such, and Sentry, right from the beginning, logged all the errors, but this week it just stopped. After a bit of digging, I noticed that when I run the app via "flutter run" it logs, but when I run via debug mode, it doesn't. Does anyone know what could be happening? I don't know if it is code related, since it always worked and I didn't change anything.



Does Remotelock through the Graph API work?

I'm trying to make remoteLock work; however, I'm always getting the following error:

{ "error": { "code": "BadRequest", "message": "{\r\n "_version": 3,\r\n "Message": "An error has occurred - Operation ID (for customer support): 00000000-0000-0000-0000-000000000000 - Activity ID: db4dc989-7ba7-4bf1-8498-62ef8c5defb5 - Url: https://fef.amsub0202.manage.microsoft.com/DeviceFE/StatelessDeviceFEService/deviceManagement/managedDevices('e85b9c69-fd4c-405c-bc34-2af1dc84f645')/microsoft.management.services.api.remoteLock?api-version=2022-07-29\",\r\n "CustomApiErrorPhrase": "",\r\n "RetryAfter": null,\r\n "ErrorSourceService": "",\r\n "HttpHeaders": "{}"\r\n}", "innerError": { "date": "2023-04-25T12:41:30", "request-id": "db4dc989-7ba7-4bf1-8498-62ef8c5defb5", "client-request-id": "db4dc989-7ba7-4bf1-8498-62ef8c5defb5" } } }

Am I doing something wrong?

Doing a GET on /deviceManagement/managedDevices/{managedDeviceId} gives me my device details. However the RemoteLock with a POST doesn't seem to work.

Following the documentation link below, I've set the Accept and Authorization headers correctly, and the body of the request is empty.

https://learn.microsoft.com/en-us/graph/api/intune-devices-manageddevice-remotelock?view=graph-rest-1.0
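
For reference, the documented call is a bare POST with an empty body, and per that page it requires the DeviceManagementManagedDevices.PrivilegedOperations.All permission, which is a different scope than the read permission that makes the GET succeed (a hedged guess at the mismatch):

POST https://graph.microsoft.com/v1.0/deviceManagement/managedDevices/{managedDeviceId}/remoteLock
Authorization: Bearer {token}
Content-Length: 0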



Strange issue with API data in WordPress

I'm experimenting with an API which returns acronym definitions. My site has 2 custom post types, 'acronym' and 'definition'. I'm generating a random 2 - 3 letter string and feeding it to the API. The idea is that, if the API returns data, the randomly generated acronym is added as an 'acronym' post and the definitions are added as 'definition' posts. The definitions are then linked to the acronym via an ACF field. If the randomly generated acronym already exists as an 'acronym' post, then the definitions are simply created and assigned to that acronym post.

I'm running the code using the WP admin footer hook for testing purposes. Here's a boiled down version of my code:

add_action('admin_footer', 'api_fetch');

function api_fetch(){

    $length = rand(2, 3);
    $randacro = '';
    for ($i = 0; $i < $length; $i++) {
        $randacro .= chr(rand(97, 122));
    }

    // Set the endpoint URL
    $url = 'https://apiurl.com/xxxxxxxxx/' . $randacro . '&format=json';

    // Initialize curl
    $curl = curl_init($url);

    // Set curl options
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_HTTPHEADER, array(
        'Content-Type: application/json'
    ));

    // Send the request and get the response
    $response = curl_exec($curl);

    // Close curl
    curl_close($curl);

    // Handle the response
    if ($response) {
        
        $resarray = json_decode($response);

        echo '<div style="padding: 0 200px 50px;">Randomly generated acronym: ' . strtoupper($randacro) . '<br><br>';

        
        if ( isset($resarray->result) ) {

            if (post_exists(strtoupper($randacro))) {
                $args = array(
                    'post_type' => 'acronym',
                    'post_title' => strtoupper($randacro),
                    'posts_per_page' => 1,
                );
                $query = new WP_Query( $args );
                if ( $query->have_posts() ) {
                    $acro_id = $query->posts[0]->ID;
                    wp_reset_postdata();
                }
            } else {
                $new_acro = array(
                    'post_title'    => strtoupper(wp_strip_all_tags($randacro)),
                    'post_status'   => 'publish',
                    'post_author'   => 1,
                    'post_type'     => 'acronym',
                );
                $acro_id = wp_insert_post( $new_acro );
            }

            if ($acro_id) {

                echo 'Found/created acronym post: ID = ' . $acro_id . ' - Title = ' . get_the_title($acro_id) . '<br><br><pre>';

                foreach($resarray->result as $result) {
                    print_r($result);
                    echo '<br><br>';
                    
                    if (!is_string($result)) {
                        $def = $result->definition;
                        if ( get_post_status( $acro_id ) && !post_exists( $def ) ) {
                            $new_def = array(
                                'post_title'    => wp_strip_all_tags($def),
                                'post_status'   => 'publish',
                                'post_author'   => 1,
                                'post_type'     => 'definition',
                            );
                            $new_def_id = wp_insert_post( $new_def );
                            update_field('acronym', $acro_id, $new_def_id);
                            update_field('likes', '0', $new_def_id);
                            $defs_published++;
                        }
                    }
                    
                }
            }
            

        }

        echo '</pre></div>';
        
    } else {
        echo '<div style="padding: 0 200px;">No response</div>';
    } 

}

The issue I'm having: when I refresh the page to run the code again after the code has created a new acronym, and it then finds an existing acronym, it assigns the definitions to the previously created acronym. For example, on this run the code created the acronym post "VC", as it didn't already exist:

Output example 1

I refreshed the page to run the code again, and the randomly generated acronym already exists as an 'acronym' post, but the definitions are being assigned to the previously created acronym post (shown as "Found/created acronym post"):

Output example 2

I've tried adding wp_reset_postdata() and wp_reset_query() throughout my code. I've tried setting the $acro_id to null at the beginning and end of the code and I've tried unsetting all the variables at the end of my code, but none of this has worked. Any ideas where I might be going wrong here?
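
One hedged suspicion about the lookup branch: WP_Query does not recognize a 'post_title' argument, so that query silently ignores it and simply returns the most recent 'acronym' post, which is exactly the one created on the previous run. Since WordPress 4.4 the supported argument is 'title'; a sketch of the corrected query:

$args = array(
    'post_type'      => 'acronym',
    'title'          => strtoupper($randacro),   // 'title', not 'post_title'
    'posts_per_page' => 1,
);
$query = new WP_Query( $args );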



2023-04-27

How to decrypt cCryptoGS (CryptoJS) string using OpenSSL?

Sample function in Google Apps Script using the cCryptoGS library:

function encrypt() {

  let message = "Hello world!";
  let pw = "password";

  let encrypted = cCryptoGS.CryptoJS.AES.encrypt(message, pw).toString();

  Logger.log(encrypted);

  // Sample result, changes each time due to the salt
  // U2FsdGVkX19A/TPmx/tmR9MRiKU9AQPhUYKD/lyoY/c=

};

Trying to decrypt using OpenSSL:

echo "U2FsdGVkX19A/TPmx/tmR9MRiKU9AQPhUYKD/lyoY/c=" | openssl enc -d -a -A -aes-256-cbc -iter 1 -md md5 -pass pass:'password' && echo

That command returned an error:

bad decrypt
803B3E80A67F0000:error:1C800064:Provider routines:ossl_cipher_unpadblock:bad decrypt:../providers/implementations/ciphers/ciphercommon_block.c:124:

OpenSSL version is OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022).

How could the ciphertext generated by cCryptoGS be decrypted using OpenSSL?
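
A hedged suggestion: CryptoJS's password mode derives the key with OpenSSL's legacy EVP_BytesToKey scheme (MD5, no PBKDF2), while passing -iter to openssl switches key derivation to PBKDF2, so the two keys no longer match. Dropping -iter while keeping -md md5 should line them up:

echo "U2FsdGVkX19A/TPmx/tmR9MRiKU9AQPhUYKD/lyoY/c=" | openssl enc -d -a -A -aes-256-cbc -md md5 -pass pass:'password' && echo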



Change color of panel based on label in Grafana

I would like to change the color of my panel based on the panel title. I have a simple visualization of the tally of healthy and unhealthy.

The word "healthy" in the panel title is a dynamic value determined by a variable name $status.

I made it dynamic by using the $status variable in the panel title.

However, I want to change the color of the background if the value of $status is unhealthy.

Something, like this

[IMAGE]

I tried to do it with value mapping, but nothing happens. Is this possible to do?



Unable to display minor grid lines in ggplot2

I am trying to differentiate the tiles by displaying white minor grid lines, but I am unable to get it to work. Could someone help me, please?

This is what my function looks like. I have tried changing panel.grid.minor to specify x & y gridlines as well; that didn't work either. Help please. Thanks in advance.

library(ggplot2)
library(tidyverse)

# Read the data
data <- read.table("pd_output.txt", header = TRUE, sep = "\t")

# Create a generic waterfall plot function
create_waterfall_plot <- function(data) {
  data <- data %>%
    mutate(mutation_types = factor(mutation_types),
           variant_consequences = factor(variant_consequences),
           impact = factor(impact),
           clinical_annotations = factor(clinical_annotations),
           TE_fusion = factor(TE_fusion),
           hotspot = factor(hotspot))
  
  plot <- ggplot(data, aes(x = sampleID, y = gene_name)) +
    theme_bw() +
    theme(panel.grid.major = element_blank(),
          panel.grid.minor = element_line(size = 2, colour ="white"),
          axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.5)) +
    geom_tile(aes(fill = variant_consequences, colour = mutation_types, alpha = 0.5), size = 0.5, width = 0.8, height = 0.8) +
    geom_point(aes(shape = mutation_types, colour = impact), size = 3) +
    scale_fill_manual(values = c("missense_variant" = "blue", "splice_donor_variant" = "orange", "stop_gained" = "darkgreen", "frameshift_variant" = "yellow", "inframe_deletion" = "brown", "missense_variant&splice_region_variant" = "violet", "stop_gained & inframe_deletion" = "gray", "inframe_insertion" = "cyan")) +
    scale_color_manual(values = c("MODERATE" = "lightpink", "HIGH" = "red")) +
    labs(x = "Sample ID", y = "Gene Name",
    fill = "Variant Consequences", colour = "Impact", shape = "CLONALITY") +
    
    guides(alpha = FALSE) 
    
  return(plot)
}

# Generate the waterfall plot
waterfall_plot <- create_waterfall_plot(data)
print(waterfall_plot)

Sample data looks like this

sampleID    gene_name   mutation_types  variant_consequences    impact  clinical_annotations    TE_fusion   hotspot
P-0028  NCOR1   CLONAL  missense_variant    MODERATE    localised   no  no
P-0029  SETD2   CLONAL  splice_donor_variant    HIGH    localised   yes yes
P-0030  ATM SUBCLONAL   stop_gained HIGH    localised   no  no
P-0031  CDKN1B  CLONAL  frameshift_variant  HIGH    localised   yes no
P-0032  KMT2C   CLONAL  stop_gained HIGH    metastatic  no  no
P-0033  FOXA1   CLONAL  stop_gained HIGH    metastatic  yes yes
P-0034  NCOR1   CLONAL  missense_variant    MODERATE    metastatic  yes no
P-0035  KMT2A   CLONAL  missense_variant    MODERATE    localised   yes no
P-0036  KMT2C   CLONAL  missense_variant    MODERATE    localised   yes no

current output plot looks like this
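
This is likely a dead end by design: ggplot2 only draws minor grid lines between the breaks of a continuous scale, and both axes here (sampleID, gene_name) are discrete, so panel.grid.minor has nothing to draw. (Recent ggplot2 also renamed element_line's size argument to linewidth.) A sketch of one workaround, drawing white separators at the half positions between categories (untested against this data):

plot <- plot +
  geom_hline(yintercept = seq(0.5, length(unique(data$gene_name)) + 0.5, by = 1),
             colour = "white", linewidth = 2) +
  geom_vline(xintercept = seq(0.5, length(unique(data$sampleID)) + 0.5, by = 1),
             colour = "white", linewidth = 2)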



Yup nested validation schema with conditional validation

I have created a formik object with initial values similar to

{customerDetails: {id: "", name: "", mobileNumber: ""}, notes: {id: "", text: "", type: ""}}

How do I write a conditional Yup validation schema for this object such that, if the id of customerDetails is present, name and mobileNumber are not required, but they are required if id is not present?

I tried something like this:

const validationSchema = Yup.object().shape({
    customerDetails: Yup.object().shape({
        id: Yup.string(),
        firstName: Yup.string().when('id', {
            is: (value: string) => !Boolean(value),
            then: Yup.string().required("Required")
        }),
    })
})

but I am getting this error:

No overload matches this call.
  Overload 1 of 4, '(keys: string | string[], builder: ConditionBuilder<StringSchema<string, AnyObject, undefined, "">>): StringSchema<string, AnyObject, undefined, "">', gave the following error.
    Argument of type '{ is: (value: string) => boolean; then: Yup.StringSchema<string, Yup.AnyObject, undefined, "">; }' is not assignable to parameter of type 'ConditionBuilder<StringSchema<string, AnyObject, undefined, "">>'.
      Object literal may only specify known properties, and 'is' does not exist in type 'ConditionBuilder<StringSchema<string, AnyObject, undefined, "">>'.
  Overload 2 of 4, '(keys: string | string[], options: ConditionConfig<StringSchema<string, AnyObject, undefined, "">>): StringSchema<string, AnyObject, undefined, "">', gave the following error.
    Type 'StringSchema<string, AnyObject, undefined, "">' is not assignable to type '(schema: StringSchema<string, AnyObject, undefined, "">) => ISchema<any, any, any, any>'.
      Type 'StringSchema<string, AnyObject, undefined, "">' provides no match for the signature '(schema: StringSchema<string, AnyObject, undefined, "">): ISchema<any, any, any, any>'.ts(2769)
index.d.ts(296, 5): The expected type comes from property 'then' which is declared here on type 'ConditionConfig<StringSchema<string, AnyObject, undefined, "">>'
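
The overload error reads like the Yup v1 signature change: in v1, then (and otherwise) must be functions that receive and return a schema, rather than bare schemas as in v0.x. A sketch under that assumption:

const validationSchema = Yup.object().shape({
    customerDetails: Yup.object().shape({
        id: Yup.string(),
        firstName: Yup.string().when('id', {
            is: (value?: string) => !value,
            then: (schema) => schema.required("Required"),
            otherwise: (schema) => schema.notRequired(),
        }),
    })
})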


async handling in Go, unintuitive behavior with goroutines & context

Background

I am working on a server and decided to move away from traditional async processing for long running requests (pub/sub, etc.) by using goroutines and context. My idea was to take a request and kick off a goroutine with a new timeout context that will complete the processing regardless of if the initial request context is cancelled (i.e. user refreshes).

I had done this before on a single endpoint but wanted to make a generalized reusable wrapper this time that I could give a timeout and put an async chunk of code in (shown below is working code).

type ContextSplitter struct {
    InitialCtx context.Context
    Timeout    time.Duration
}

func NewContextSplitter(timeout time.Duration) *ContextSplitter {
    return &ContextSplitter{
        InitialCtx: context.Background(),
        Timeout:    timeout,
    }
}

func (c *ContextSplitter) Do(worker func(ctx context.Context) error) error {
    var wg sync.WaitGroup
    errs := make(chan error, 1)
    newCtx, cancel := context.WithTimeout(context.Background(), c.Timeout)

    defer cancel()

    wg.Add(1)

    // run the worker
    go func(ctx context.Context) {
        defer wg.Done()
        defer func() {
            if r := recover(); r != nil {
                // if the worker panics, send the panic error to the errs channel
                errs <- fmt.Errorf("worker panic: %v", r)
            }
        }()

        // call the worker function and send any returned errors to the errs channel
        errs <- worker(ctx)
    }(newCtx)

    // create a sync.Once object to ensure that the done channel is only closed once
    doneOnce := sync.Once{}
    done := make(chan bool, 1)

    // run a routine to listen for when the worker finishes executing
    go func() {
        wg.Wait()
        done <- true
        doneOnce.Do(func() {
            close(errs)
            close(done)
        })
    }()

    select {
    case <-c.InitialCtx.Done():
        // initial context cancelled, continue processing in background
        return c.InitialCtx.Err()
    case <-done:
        // continue
    }

    // collect any errors that occurred during execution and return them
    var err error
    for e := range errs {
        if e != nil {
            err = multierr.Append(err, e)
        }
    }

    return err
}

Which is used like so:

err := NewContextSplitter(time.Minute*5).Do(func(newCtx context.Context) error {
    // do some long-running tasks, including propagating the newCtx
    obj, err := doStuff(newCtx, stuff)
    // ... use obj ...
    return err
})

Question

I finally got it working but I am making this post because I am not completely sure why it works and am looking for some insight into the inner workings of golang, goroutines, & context.

The main fix ended up being the removal of initial (request) context from NewContextSplitter(), i.e. this

func NewContextSplitter(initialCtx context.Context, timeout time.Duration) *ContextSplitter {
    return &ContextSplitter{
        InitialCtx: initialCtx,
        Timeout:    timeout,
    }
}

to this

func NewContextSplitter(timeout time.Duration) *ContextSplitter {
    return &ContextSplitter{
        InitialCtx: context.Background(),
        Timeout:    timeout,
    }
}

Basically, the request context would get cancelled and any functions in my worker that took the new (timeout) context would fail with ErrContextCancelled. I am thinking that the initial context cancellation got propagated to my Do(worker) and the defer cancel() would get called, thereby cancelling my new timeout context. The strange thing is that the worker would not exit via the newCtx cancellation from the defer, but continue running with a bunk context and err out on ErrContextCancelled.

My main questions are:

  • Why would passing initialCtx to NewContextSplitter allow for the propagation of cancelling that context to Do(worker), since Do() does not take that context?
  • Why does the worker continue processing with a cancelled context?
  • Am I missing anything else / any suggestions?
  • Is this a valid pattern for handling long running tasks?

Let me know if I can provide any more context (ha)
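
On the first two questions, my hedged reading of the original failure: newCtx never derived from the request context at all; rather, when InitialCtx was the request context, the select returned from Do() the moment the request was cancelled, and the defer cancel() that runs on return revoked newCtx while the worker was still using it. Cancelling a context never stops a goroutine by itself, which is why the worker kept running and only failed at its next context-aware call. A sketch that ties cancel to worker completion instead of to Do() returning:

newCtx, cancel := context.WithTimeout(context.Background(), c.Timeout)
// note: no `defer cancel()` here

go func(ctx context.Context) {
    defer cancel() // release the timeout context only once the worker finishes
    defer wg.Done()
    errs <- worker(ctx)
}(newCtx)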



2023-04-26

AWS ECS Service Discovery Cors Issue?

I am implementing a full stack app on ECS with Service Discovery enabled. The infrastructure consists of a backend which talks to a MySQL RDS database, and a VueJS frontend which makes requests to the Node.js backend. I am running into a CORS issue when I make a request from the frontend. To reproduce the issue:

  1. Clone https://github.com/FIAF0624/backend-infra

  2. terraform init

  3. terraform plan

  4. terraform apply

  5. When the mysql instance is up and running, connect to the database using mysql -h <endpoint> -u admin -padmin123 and run the following query :

    DROP DATABASE IF EXISTS customer;
    CREATE DATABASE customer;
    USE customer;
    DROP TABLE IF EXISTS books;
    CREATE TABLE books (
        id int NOT NULL AUTO_INCREMENT,
        name varchar(255) NOT NULL,
        price int NOT NULL,
        PRIMARY KEY (id)
    );
    INSERT INTO books VALUES (1,'Basic Economics', 20),(2,'Harry Potter', 15),(3,'Physics', 17);
    SELECT * FROM books;

  6. Clone https://github.com/FIAF0624/frontend-infra

  7. terraform init

  8. terraform plan

  9. terraform apply

  10. Go to ECS -> rolling-ls-cluster -> Tasks -> Enter the Public IP in browser and click on the button.

  11. Check the browser console ( Right click -> Inspect -> Console ) and you will see the following error.

Access to XMLHttpRequest at 'api.example.terraform.com:3300/mysql' from origin 'http://<public-IP>' has been blocked by CORS policy: Cross origin requests are only supported for protocol schemes: http, data, isolated-app, chrome-extension, chrome, https, chrome-untrusted.

GET api.example.terraform.com:3300/mysql net::ERR_FAILED
Uncaught (in promise) rt {message: 'Network Error', name: 'AxiosError', code: 'ERR_NETWORK', config: {…}, request: XMLHttpRequest, …}

However, if I create an EC2 instance and make a request using curl api.example.terraform.com:3300/mysql, I get the response back.

This is my Node.js backend: https://github.com/FIAF0624/backend/blob/main/server.js and I have allowed CORS as well, explicitly adding the headers. The security group allows all traffic from anywhere. I am not sure why I am running into this CORS issue; the app runs fine for me locally, and I am not sure where exactly the issue is. Any help will be appreciated.

This is the frontend repo: https://github.com/FIAF0624/frontend/blob/a2e89010d3dfaa75caecfb8d656723abe2ff3456/src/components/Dashboard.vue#L29 where I am using Axios to make the request.
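
Two hedged observations from the error text: the browser complains about protocol schemes, which suggests the request URL 'api.example.terraform.com:3300/mysql' has no scheme and is not being parsed as http at all; and a service-discovery name like that only resolves inside the VPC (which is why curl from an EC2 instance works), so a browser outside the VPC cannot reach it regardless of CORS. A sketch of the first fix in the Vue component, where '<backend-endpoint>' is a placeholder for something the browser can actually resolve (e.g. a public load balancer):

axios.get('http://<backend-endpoint>:3300/mysql').then((res) => {
  console.log(res.data);
});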



Something's wrong with react 18 useEffect. React17 useEffect functions well [closed]

https://codesandbox.io/p/sandbox/recursing-goldwasser-fde16r?selection=%5B%7B%22endColumn%22%3A37%2C%22endLineNumber%22%3A4%2C%22startColumn%22%3A37%2C%22startLineNumber%22%3A4%7D%5D&file=%2Fpages%2Findex.js

Referring to the above codesandbox, the scroll-up menu works well in React 17, but the transition does not appear in React 18. The problem is possibly with useEffect. What is the problem?

I tried on React 17 and React 18. In React 17 the scroll-up transition displays correctly, but in React 18 it is missing.

https://codesandbox.io/p/sandbox/dreamy-resonance-hdosok This shows the problem in React 18. Is there any workaround to get back the React 17 behaviour, or a way to keep the component structure while reproducing the React 17 behaviour?

https://github.com/websummernow/ReactBehaviours
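
A hedged first check, since the sandbox is a Next.js pages project: React 18 under StrictMode mounts, unmounts, and remounts components in development, running every useEffect twice, which can break one-shot transitions wired up inside an effect. Disabling it confirms or rules that out:

// next.config.js: test with StrictMode off to see whether double-invoked
// development effects are what hides the transition
module.exports = {
  reactStrictMode: false,
};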



Cannot find AwaitableTriggerDagRunOperator anymore for Airflow [Python]

I'm working on a Python project using Airflow. The project has no requirements.txt file, so I simply installed the latest versions of the libraries by putting their names inside a requirements.txt file, and I've been trying to make it work.

The import which is causing trouble is this one:

from airflow_common.operators.awaitable_trigger_dag_run_operator import (
    AwaitableTriggerDagRunOperator,
)

Looking online for AwaitableTriggerDagRunOperator, I cannot find any documentation about this operator; the only result that comes up is a page where another person is using it, importing it the same way I am.

I guess the project was developed with a very old version of Airflow and things have changed quite a bit. Here are the versions I have installed.

$ pip freeze | awk '/airflow/'
airflow-commons==0.0.67
apache-airflow==2.5.3
apache-airflow-providers-cncf-kubernetes==6.0.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-sqlite==3.3.1


Send data from single inputs.exec to multiple outputs.kafka in telegraf

I have data coming into Telegraf with multiple fields. I want to have one inputs.exec plugin and multiple outputs.kafka plugins, sending each field to its respective Kafka topic.

My data is in this format:

{
    "field1": [
        {
            "abc" : 0,
            "efg" : 1,
            "hij" : 4,
            "jkl" : 5
        }
    ],
    "field2": [
        {
            host : "admin1",
            timestamp: 1682314679774
        },
        {
            host : "admin2",
            timestamp: 1682314679775
        },
        {
            host : "admin3",
            timestamp: 1682314679773
        }
    ]
}

I want to send field1 to "field1" kafka topic and field2 to "field2" kafka topic. My expected output is:

In field1 kafka topic: { "abc" : 0, "efg" : 1, "hij" : 4, "jkl" : 5 }

In field2 kafka topic: { "host": "admin1", "timestamp": 1682314679774 }, { "host": "admin2", "timestamp": 1682314679775 }, { "host": "admin3", "timestamp": 1682314679773 }

How do I implement this in the Telegraf configuration file?

I tried writing the Telegraf conf file in the following format, but it is not working.

[[outputs.kafka]]
  brokers = ["admin:9091"]
  topic = "field1"
  data_format = "json"
  flush_interval = "1m"

[[outputs.kafka]]
  brokers = ["admin:9092"]
  ## Kafka topic for producer messages
  topic = "field2"
  data_format = "json"
  flush_interval = "1m"

[[inputs.exec]]
  interval = "1m"
  commands = ["/opt/clustertest/bin/script.py"]
  timeout = "10s" # I want the script to execute every ten seconds
  data_format = "json"
  flush_interval = "1m"
  json_query = "{'field1': [], 'field2': []}"

Can anyone please help?
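
The routing itself is normally done with Telegraf's metric filtering on each output, rather than with json_query (a parser option for selecting part of the input, not a router). A sketch under the assumption that the JSON parser flattens the arrays into fields prefixed field1_... and field2_...; the actual flattened names are worth verifying first with an outputs.file plugin:

[[outputs.kafka]]
  brokers = ["admin:9091"]
  topic = "field1"
  data_format = "json"
  fieldpass = ["field1_*"]   # publish only fields that came from "field1"

[[outputs.kafka]]
  brokers = ["admin:9092"]
  topic = "field2"
  data_format = "json"
  fieldpass = ["field2_*"]   # publish only fields that came from "field2"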



Is there a better dynamic way to replace any digit except 0 using sed/awk

Team, my output is below and I want to replace all non-zeros after the comma with boolean 1.

DA:74,0
DA:75,0
DA:79,3 < NON BOOLEAN
DA:80,3 < NON BOOLEAN
DA:81,3 < NON BOOLEAN
DA:82,4 < NON BOOLEAN
DA:83,1

So I did this: sed 's/,[1-999]/,1/g'

But I can't keep manually increasing 999 to 9999 and so on. Is there a dynamic way to say: if the value is not 0, replace it with 1?

expected output

DA:74,0
DA:75,0
DA:79,1
DA:80,1
DA:81,1
DA:82,1
DA:83,1

possible values are any combinations like

DA:108,23
DA:110,101
DA:111,098
DA:112,100

All of the above should be replaced by 1. The only values that should be left untouched are a single-digit 0 or a single-digit 1.

So any non-zero number preceded by a comma should be replaced.
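
A sketch of two dynamic ways: match one-or-more digits after the comma while requiring at least one non-zero digit (so 0 survives), or compare the second field numerically:

sed -E 's/,0*[1-9][0-9]*$/,1/' file            # any value containing a non-zero digit becomes 1

awk -F, -v OFS=, '$2 != 0 { $2 = 1 } 1' file   # numeric test: 098 -> 1, 0 stays 0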



How to change marker size based on the values of multiple columns

I need to do a scatter plot where the size of each dot is determined by how many data points share that (x, y) value.

Sample data is Student Performance Data Set: student-mat.csv

import pandas as pd
import seaborn as sns

data =\
{'traveltime': [2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 1, 3, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 3, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 1, 3, 1, 1, 2, 2, 2, 1, 1, 1, 1, 1, 2, 1, 2, 1, 1, 1, 1, 2, 1, 2, 1, 1, 2, 1, 1, 1, 1, 2, 1, 2, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 4, 1, 1, 1, 1, 1, 1, 1, 2, 2, 3, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 3, 1, 1, 4, 1, 3, 2, 1, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 4, 1, 1, 2, 1, 1, 1, 1, 3, 3, 1, 2, 2, 2, 1, 4, 2, 1, 1, 1, 1, 3, 2, 1, 1, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1, 1, 3, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 2, 1, 1, 1, 1, 4, 2, 1, 2, 1, 1, 2, 2, 1, 1, 3, 1, 2, 2, 1, 1, 2, 3, 2, 1, 1, 1, 2, 3, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1, 2, 1, 1, 2, 1, 2, 2, 2, 2, 1, 2, 2, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 2, 2, 1, 1, 1, 2, 2, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 4, 1, 2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 2, 1, 3, 2, 1, 2, 2, 2, 2, 3, 2, 2, 1, 2, 2, 2, 3, 2, 3, 2, 3, 1, 1, 2, 4, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 1, 1, 2, 1, 2, 1, 3, 1],
'G2': [6, 5, 8, 14, 10, 15, 12, 5, 18, 15, 8, 12, 14, 10, 16, 14, 14, 10, 5, 10, 14, 15, 15, 13, 9, 9, 12, 16, 11, 12, 11, 16, 16, 10, 14, 7, 16, 16, 12, 13, 10, 12, 18, 8, 10, 8, 12, 19, 15, 7, 13, 13, 11, 10, 13, 9, 15, 15, 10, 16, 11, 8, 10, 9, 10, 15, 13, 7, 9, 16, 15, 10, 6, 12, 12, 9, 11, 11, 8, 5, 12, 10, 6, 15, 10, 9, 7, 14, 10, 6, 7, 17, 6, 10, 13, 10, 15, 9, 14, 9, 7, 17, 13, 6, 18, 11, 8, 18, 13, 15, 19, 10, 13, 19, 9, 15, 13, 14, 7, 13, 15, 14, 13, 11, 7, 13, 10, 8, 4, 18, 0, 0, 13, 11, 0, 0, 0, 0, 12, 16, 9, 9, 11, 14, 0, 11, 7, 11, 6, 9, 5, 13, 10, 0, 11, 8, 12, 8, 15, 12, 6, 9, 0, 10, 8, 11, 10, 15, 7, 14, 5, 15, 11, 7, 11, 9, 13, 5, 8, 10, 8, 13, 17, 9, 13, 12, 12, 15, 7, 9, 12, 8, 8, 9, 14, 15, 15, 9, 18, 9, 16, 10, 9, 6, 10, 9, 7, 12, 9, 7, 8, 12, 13, 7, 10, 15, 6, 6, 7, 10, 6, 5, 16, 13, 13, 8, 15, 11, 8, 10, 13, 11, 9, 13, 7, 9, 13, 12, 11, 7, 12, 11, 0, 12, 0, 18, 12, 8, 5, 15, 8, 10, 9, 9, 12, 9, 12, 11, 14, 9, 18, 8, 12, 9, 10, 17, 9, 10, 9, 0, 9, 14, 11, 14, 10, 12, 9, 9, 8, 11, 8, 9, 12, 9, 9, 10, 18, 12, 14, 13, 11, 15, 12, 18, 13, 12, 9, 8, 13, 15, 10, 11, 12, 17, 14, 12, 18, 9, 12, 10, 9, 12, 11, 10, 13, 11, 8, 10, 11, 11, 13, 9, 11, 14, 15, 12, 15, 10, 9, 14, 8, 14, 0, 8, 9, 15, 13, 8, 15, 10, 12, 10, 15, 8, 10, 13, 15, 10, 15, 13, 7, 13, 7, 8, 11, 9, 13, 12, 10, 16, 13, 12, 11, 15, 11, 10, 13, 6, 10, 12, 7, 12, 11, 5, 18, 8, 14, 9, 15, 10, 14, 6, 11, 5, 5, 9, 5, 5, 9, 5, 9, 16, 8, 12, 9]}

df = pd.DataFrame(data=data)

sns.scatterplot(data=df, x="traveltime", y="G2", s=100)

[IMAGE]

This code changes the sizes of all dots.

I've tried setting size to the xy values, but it didn't work.
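
One way I would sketch it: count how many rows share each (traveltime, G2) pair and map that count to the size aesthetic:

import pandas as pd
import seaborn as sns

# count duplicates of each (x, y) pair and use the count as the marker size
counts = df.groupby(["traveltime", "G2"]).size().rename("n").reset_index()
sns.scatterplot(data=counts, x="traveltime", y="G2",
                size="n", sizes=(20, 300), legend="brief")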



2023-04-25

How to visualise a changed pixmap in QIcon without resetting it on the widget?

Now that dark mode has finally come to Windows with Qt 6.5, I noticed that a lot of my icons don't look too well on a dark background. So I'd like to use different icons for light and dark mode. And, to make things difficult, have the icons change (their appearance) when the user switches mode on his OS.

In order to avoid having to setIcon() on all kinds of widgets all over the place, I thought I'd subclass QIcon and have it change its pixmap on the colorSchemeChanged signal.

class ThemeAwareIcon(QIcon):
    def __init__(self, dark_pixmap, light_pixmap, *args):
        super().__init__(*args)
        self.dark_pm = dark_pixmap
        self.light_pm = light_pixmap
        self.app = QApplication.instance()
        self.app.styleHints().colorSchemeChanged.connect(self.set_appropriate_pixmap)
        self.set_appropriate_pixmap()

    def set_appropriate_pixmap(self):
        current_scheme = self.app.styleHints().colorScheme()
        pm = self.dark_pm if current_scheme == Qt.ColorScheme.Dark else self.light_pm
        self.addPixmap(pm, QIcon.Mode.Normal, QIcon.State.On)

This works almost as intended; pixmaps are changed upon the signal. It's just that the changed pixmap isn't displayed on the widget that the icon is set on. The only way I found to make the change visible is to reset the icon on the widget, and that is what I was trying to avoid in the first place.

So, can my icon class be rescued somehow or is what I want just not possible this way?
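
One alternative I can think of (a sketch, assuming PySide6 naming): QIcon is a value type, so widgets take a copy on setIcon() and never see later mutations of the original. A custom QIconEngine, however, is consulted every time the icon is painted, so it can pick the pixmap per color scheme at paint time; the remaining caveat is that widgets still need a repaint when the scheme flips, which most get from the accompanying palette change.

from PySide6.QtCore import Qt
from PySide6.QtGui import QIcon, QIconEngine
from PySide6.QtWidgets import QApplication

class ThemeAwareEngine(QIconEngine):
    def __init__(self, dark_pixmap, light_pixmap):
        super().__init__()
        self.dark_pm = dark_pixmap
        self.light_pm = light_pixmap

    def paint(self, painter, rect, mode, state):
        # chosen at paint time, so no signal plumbing is needed
        scheme = QApplication.styleHints().colorScheme()
        pm = self.dark_pm if scheme == Qt.ColorScheme.Dark else self.light_pm
        painter.drawPixmap(rect, pm)

    def clone(self):
        # QIconEngine.clone() is pure virtual and must be implemented
        return ThemeAwareEngine(self.dark_pm, self.light_pm)

# usage (pixmaps assumed defined): icon = QIcon(ThemeAwareEngine(dark_pm, light_pm))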



How to activate a setter method without input in the main program code?

I'm working on a program that needs to check if a value is an integer before applying it to an attribute of a class. This must be done in a setter ("set") method.

internal class Dummy {
    private int number;
            
    public int Number {
        get { return number; }
        set {
            Console.Write("Input number: ");
            if (int.TryParse(Console.ReadLine(), out value) == true) {
                number = value;
                Console.WriteLine("The number is " + number + ".");
            } else {
                Console.WriteLine("That is not a number.");
            }
        }
    }
}

I tried to make the input in the main program a string (objectName.Number = Console.ReadLine();) instead of an int (objectName.Number = Convert.ToInt32(Console.ReadLine());), but this obviously gave me an error.

I then tried keeping the input an int but typing a string when running the program, which leads to a crash.

The only way I could make this work was for the user to enter a "useless" number (for the first "main program" input to be discarded) and then input the right one when prompted with "Input number: "; but this "solution" looks and feels clunky.

I'm open to different ways of doing this, but remember: the setter absolutely must be the code that checks whether the value is an int (and then applies it accordingly).
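To illustrate what I'd accept as a "different way": here's a minimal sketch where a string-typed property (NumberInput is a name I made up) receives the raw console input, so the validation still lives entirely in the setter:

// sketch only: Main can now do objectName.NumberInput = Console.ReadLine();
// with no extra prompt, and the setter still does all the checking
internal class Dummy {
    private int number;

    public string NumberInput {
        set {
            if (int.TryParse(value, out int parsed)) {
                number = parsed;
                Console.WriteLine("The number is " + number + ".");
            } else {
                Console.WriteLine("That is not a number.");
            }
        }
    }
}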



BigQuery Concurrency Performance

We have around 100 GB of data in BigQuery, and we have set up the required partitioning and clustering on our table. All our queries are simple select-only queries with an order-by clause and the required partition and cluster filters.

We want to maintain high concurrency (around 1,000) with latency under 1 second. Is BigQuery the right fit for this?

Currently the query performance is good, but their Google documentation mentions a limit of 100 concurrent queries.

Is BI Engine a good fit here?



Getting undefined (reading 'getLogger') with Msal Provider React TS + Vite

I'm trying to use the ms-identity-javascript-react-tutorial to manage users (sign-in/up). This is the guide I'm following: https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/1-call-api/SPA. I think that, since I'm using TypeScript, there's some package issue or something I'm doing wrong, so that the "instance" isn't initialized correctly and it can't find the getLogger function.

My AuthConfig is:

/*
 * Copyright (c) Microsoft Corporation. All rights reserved.
 * Licensed under the MIT License.
 */

import { LogLevel } from "@azure/msal-browser";

/**
 * Configuration object to be passed to MSAL instance on creation.
 * For a full list of MSAL.js configuration parameters, visit:
 * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md
 */
export const msalConfig = {
    auth: {
        clientId: "Enter_the_Application_Id_Here", // This is the ONLY mandatory field that you need to supply.
        authority: "https://login.microsoftonline.com/Enter_the_Tenant_Info_Here", // Defaults to "https://login.microsoftonline.com/common"
        redirectUri: "/", // You must register this URI on Azure Portal/App Registration. Defaults to window.location.origin
        postLogoutRedirectUri: "/", // Indicates the page to navigate after logout.
        clientCapabilities: ["CP1"] // this lets the resource owner know that this client is capable of handling claims challenge.
    },
    cache: {
        cacheLocation: "localStorage", // Configures cache location. "sessionStorage" is more secure, but "localStorage" gives you SSO between tabs.
        storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
        secureCookies: false
    },
    system: {
        loggerOptions: {
            /**
             * Below you can configure MSAL.js logs. For more information, visit:
             * https://docs.microsoft.com/azure/active-directory/develop/msal-logging-js
             */
            loggerCallback: (level: any, message: any, containsPii: any) => {
                if (containsPii) {
                    return;
                }
                switch (level) {
                    case LogLevel.Error:
                        console.error(message);
                        return;
                    case LogLevel.Info:
                        console.info(message);
                        return;
                    case LogLevel.Verbose:
                        console.debug(message);
                        return;
                    case LogLevel.Warning:
                        console.warn(message);
                        return;
                }
            }
        }
    }
};

/**
 * Add here the endpoints and scopes when obtaining an access token for protected web APIs. For more information, see:
 * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/resources-and-scopes.md
 */
export const protectedResources = {
    apiTodoList: {
        endpoint: "http://localhost:5000/api/todolist",
        scopes: {
            read: [ "api://Enter_the_Web_Api_Application_Id_Here/Todolist.Read" ],
            write: [ "api://Enter_the_Web_Api_Application_Id_Here/Todolist.ReadWrite" ]
        }
    }
}

/**
 * Scopes you add here will be prompted for user consent during sign-in.
 * By default, MSAL.js will add OIDC scopes (openid, profile, email) to any login request.
 * For more information about OIDC scopes, visit:
 * https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-permissions-and-consent#openid-connect-scopes
 */
export const loginRequest = {
    scopes: [...protectedResources.apiTodoList.scopes.read, ...protectedResources.apiTodoList.scopes.write]
};

My Main.ts

import React from 'react'
import ReactDOM from 'react-dom/client'
import { EventType, PublicClientApplication } from '@azure/msal-browser';

import App from './App'
import './index.css'
import { msalConfig } from './utils/auth/AuthConfig';

const msalInstance = new PublicClientApplication(msalConfig);

// Default to using the first account if no account is active on page load
if (!msalInstance.getActiveAccount() && msalInstance.getAllAccounts().length > 0) {
  // Account selection logic is app dependent. Adjust as needed for different use cases.
  msalInstance.setActiveAccount(msalInstance.getAllAccounts()[0]);
}

msalInstance.addEventCallback((event: any) => {
  if (event.eventType === EventType.LOGIN_SUCCESS && event.payload.account) {
      const account = event.payload.account;
      msalInstance.setActiveAccount(account);
  }
});


ReactDOM.createRoot(document.getElementById('root') as HTMLElement).render(
  <React.StrictMode>
    <App msalInstance={msalInstance}/>
  </React.StrictMode>,
)

Console output:

caught TypeError: Cannot read properties of undefined (reading 'getLogger')
    at MsalProvider.tsx:107:25
    at mountMemo (react-dom.development.js:17225:19)
    at Object.useMemo (react-dom.development.js:17670:16)
    at useMemo (react.development.js:1650:21)
    at MsalProvider (MsalProvider.tsx:106:20)
    at renderWithHooks (react-dom.development.js:16305:18)
    at mountIndeterminateComponent (react-dom.development.js:20074:13)
    at beginWork (react-dom.development.js:21587:16)
    at HTMLUnknownElement.callCallback2 (react-dom.development.js:4164:14)
    at Object.invokeGuardedCallbackDev (react-dom.development.js:4213:16)
react-dom.development.js:18687 The above error occurred in the <MsalProvider> component:

    at MsalProvider (http://localhost:3011/node_modules/.vite/deps/@azure_msal-react.js?v=f9cbc1be:123:5)
    at App (http://localhost:3011/src/App.tsx?t=1681825109441:19:16)

Consider adding an error boundary to your tree to customize error handling behavior.
Visit https://reactjs.org/link/error-boundaries to learn more about error boundaries.
logCapturedError @ react-dom.development.js:18687
react-dom.development.js:26923 Uncaught TypeError: Cannot read properties of undefined (reading 'getLogger')
    at MsalProvider.tsx:107:25
    at mountMemo (react-dom.development.js:17225:19)
    at Object.useMemo (react-dom.development.js:17670:16)
    at useMemo (react.development.js:1650:21)
    at MsalProvider (MsalProvider.tsx:106:20)
    at renderWithHooks (react-dom.development.js:16305:18)
    at mountIndeterminateComponent (react-dom.development.js:20074:13)
    at beginWork (react-dom.development.js:21587:16)
    at beginWork$1 (react-dom.development.js:27426:14)
    at performUnitOfWork (react-dom.development.js:26557:12)

Project Dependencies:

"dependencies": {
    "@auth0/auth0-react": "^2.0.1",
    "@azure/msal-browser": "^2.35.0",
    "@azure/msal-react": "^1.5.3",
    "@emotion/styled": "^11.10.6",
    "@heroicons/react": "^2.0.17",
    "@react-google-maps/api": "^2.18.1",
    "@sentry/react": "^7.45.0",
    "@sentry/tracing": "^7.45.0",
    "@testing-library/jest-dom": "^5.16.5",
    "@testing-library/react": "^13.4.0",
    "@testing-library/user-event": "^13.5.0",
    "@vitejs/plugin-react-swc": "^3.3.0",
    "axios": "^1.3.4",
    "clsx": "^1.2.1",
    "eslint-plugin-prettier": "^4.2.1",
    "jwt-decode": "^3.1.2",
    "msal": "^1.4.17",
    "react-datepicker": "^4.11.0",
    "react-input-mask": "^2.0.4",
    "uuid": "^9.0.0",
    "zxcvbn": "^4.4.2"
  },

I've tried googling but can't find the same-ish problem with MsalProvider, or see what I'm doing wrong. I've followed several interpretations but no luck.
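For context, here is roughly what I believe App.tsx is supposed to look like according to the sample (I'm sketching this from the tutorial, so treat it as an assumption); the detail I keep double-checking is that MsalProvider's prop is named instance:

// sketch of the expected App.tsx shape (not my exact file)
import { MsalProvider } from "@azure/msal-react";
import { IPublicClientApplication } from "@azure/msal-browser";

type AppProps = {
  msalInstance: IPublicClientApplication;
};

function App({ msalInstance }: AppProps) {
  return (
    <MsalProvider instance={msalInstance}>
      {/* pages / routes go here */}
    </MsalProvider>
  );
}

export default App;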



Address of a structure variable (in C)

When I declare a structure below

struct {
 int a;
 int b;
} x;

&x is the address of the structure, right? But as far as I understand, a structure is a collection of data, so I don't really understand what specific address &x points to. When you have an array, let's say,

char a[] = "This is a test";

a points to the first element of the array, a[0], and *a is that element. What about the struct case above?
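A small experiment that illustrates the question; C guarantees there is no padding before a struct's first member, so the first two lines should print the same address:

#include <stdio.h>

struct {
    int a;
    int b;
} x;

int main(void) {
    printf("&x   = %p\n", (void *)&x);    /* address of the whole object   */
    printf("&x.a = %p\n", (void *)&x.a);  /* same address as &x            */
    printf("&x.b = %p\n", (void *)&x.b);  /* typically &x plus sizeof(int) */
    return 0;
}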



Spring - rest template authenticate each request with a custom jwt

I have a Spring Boot app where I need to query an external API which is protected by a bearer token.

First I need to query the auth API for the JWT token, like:

POST https://some-external-api.com/api/auth/signin
{
    "username": "MyApp",
    "password": "PASSWORD"
}

I receive a response like:

{
    "access_token": "eyJ....",
    "token": "eyJ....",
    "validFrom": "2023-04-21T09:16:50.000Z",
    "validTo": "2023-04-28T09:16:50.000Z",
    "tokenType": "bearer",
    "expires": "2023-04-28T09:16:50.000Z",
    "token_type": "bearer"
}

where token and access_token fields contain the same jwt token with a payload that looks like

{
  "unique_name": "MyApp",
  "role": [
    "Reader",
    "MyApp"
  ],
  "nbf": 1682068610,
  "exp": 1682673410,
  "iat": 1682068610
}

Then I add this JWT token to every request using a RestTemplate interceptor. I'd like to ask what the best way is to manage this token in Spring: I don't want to implement my own token storage etc.; I'd like to use some ready-made solution.

In my app I have similar code where the API is protected by OAuth2, and I use something like:

public class Oauth2AuthInterceptor implements ClientHttpRequestInterceptor {

    private final ClientRegistration clientRegistration;
    private final OAuth2AuthorizedClientManager manager;

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body, ClientHttpRequestExecution execution) throws IOException {
        final OAuth2AuthorizeRequest oAuth2AuthorizeRequest = OAuth2AuthorizeRequest
            .withClientRegistrationId(clientRegistration.getRegistrationId())
            .principal("myAppAuth")
            .build();
        final OAuth2AuthorizedClient client = manager.authorize(oAuth2AuthorizeRequest);
        if (isNull(client)) {
            throw new IllegalStateException("client credentials flow on " + clientRegistration.getRegistrationId() + " failed, client is null");
        }
        request.getHeaders().add(HttpHeaders.AUTHORIZATION, "bearer " + client.getAccessToken().getTokenValue());
        return execution.execute(request, body);
    }
}

Is it possible to customize this default OAuth2 mechanism so that I can reuse it with my custom JWT auth endpoint?
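To frame the question better, the hand-rolled version I'd rather not maintain looks roughly like the sketch below (the class name, the inline signin call, and the 60-second refresh margin are all mine), which caches the token until validTo:

// sketch only: exactly the token bookkeeping I'd like to delegate to Spring
import java.io.IOException;
import java.time.Instant;
import java.util.Map;

import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.web.client.RestTemplate;

public class SigninAuthInterceptor implements ClientHttpRequestInterceptor {

    private final RestTemplate plainTemplate = new RestTemplate();
    private volatile String token;
    private volatile Instant validTo = Instant.EPOCH;

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                                        ClientHttpRequestExecution execution) throws IOException {
        if (Instant.now().isAfter(validTo.minusSeconds(60))) {   // refresh a bit early
            Map<?, ?> resp = plainTemplate.postForObject(
                "https://some-external-api.com/api/auth/signin",
                Map.of("username", "MyApp", "password", "PASSWORD"),
                Map.class);
            token = (String) resp.get("access_token");
            validTo = Instant.parse((String) resp.get("validTo"));
        }
        request.getHeaders().setBearerAuth(token);
        return execution.execute(request, body);
    }
}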



2023-04-24

Am I creating a memory leak and how to fix it

I am asking because I am creating an initialArray, but then I point it to a new array without freeing the initially allocated space. I tried calling free(initialArray) before pointing it to my newArray, so that the previously used array gets freed; it would look like:

free(initialArray);
initialArray = newArray;

but then I get a core dump. Any help?

#include <stdio.h>
#include <stdlib.h>

#define ENLARGE_SIZE(x, y) x += y
#define SIZE_INCREMENT 10

int *get_set();

int main() {
    int i = 0;
    int *set = get_set();
    printf("Array elements:\n");
    while (*(set + i) != '\0') {
        printf("%d,%d\n", *(set + i), i);
        i++;
    }
    free(set);
    return 1;
}

int *get_set() {
    int *initialArray = malloc(sizeof(int) * SIZE_INCREMENT);
    int arraySize = 5;
    int arrayElementCount = 0;
    int scannedInt;
    int i = 0;
    while (scanf("%d", &scannedInt) != EOF) {
        printf("Scanned %d\n", scannedInt);
        arrayElementCount++;
        if (arraySize == arrayElementCount) {
            int *newArray = realloc(initialArray, sizeof(int) * (ENLARGE_SIZE(arraySize, SIZE_INCREMENT)));
            initialArray = newArray;
            arraySize += SIZE_INCREMENT;
        }
        *(initialArray + i) = scannedInt;
        i++;
    }
    return initialArray;
}
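For comparison, my understanding of the usual realloc idiom (a sketch, not my actual code) is that on success you must not free the old pointer yourself, because realloc has already released the old block if it moved:

#include <stdlib.h>

/* sketch: grow an int array safely */
int *grow(int *arr, size_t new_count) {
    int *tmp = realloc(arr, new_count * sizeof *tmp);
    if (tmp == NULL) {
        free(arr);      /* arr is still valid here, so this free is safe */
        return NULL;
    }
    return tmp;         /* do NOT free(arr) on this path: that's a double free */
}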


Remove blank line when displaying MultiIndex Python dataframe

I have a MultiIndex Python Dataframe displaying like this:

[screenshot: the dataframe renders with an empty row between the column headers and the values]

What can I do to pull up the index names ("index1", "index2") and delete the empty row above the dataframe values to get a view like this? The goal is to export the dataframe as a PNG.

[screenshot: the desired view, with the index names pulled up onto the header row]

Here is the code that generates the first dataframe:

import pandas as pd
import numpy as np

style_valeurs = {"background-color" : "white", "color" : "black", "border-color" : "black",  "border-style" : "solid", "border-width" : "1px"}

style_col = [{"selector": "thead",
            "props": "background-color:yellow; color :black; border:3px black;border-style: solid; border-width: 1px"
            },{
            'selector':"th:not(.index_name)",
            'props':"background-color: yellow; border-color: black;border-style: solid ;border-width: 1px; text-align:left"
}]

data = np.random.rand(4,4)

columns = pd.MultiIndex.from_product([["A","B"],["C","D"]])

index = pd.MultiIndex.from_product([["x","y"],["1","2"]])

df = pd.DataFrame(data, columns = columns, index = index)

df.rename_axis(["index1","index2"], inplace = True)

df.style.set_properties(**style_valeurs).set_table_styles(style_col)

Thank you for your help!



Android TTS (Text-To-Speech) doesn't pronounce isolated word correctly

The TTS pronounces "az-zumaru" instead of "az-zumar" when passed the following Arabic word (title of a Sura): ٱلزُّمَر

Any suggestions on what to do to produce the correct speech? Is there some Unicode character that will tell the TTS engine that "no Tanween is needed here" (for example)? My guess is that it is adding a tanween when it should not, but I'm not sure, as I'm not an expert in this area.

I've tried different spellings (without diacritics) from https://en.wikipedia.org/wiki/Az-Zumar but it seems to pronounce it with a trailing "u" sound in all cases.

If part of a longer sentence, the word appears to be pronounced correctly.

I've also tried different android voices (e.g., ar-xa-x-arz-local, ar-xa-x-arc-local, etc.) and they all seem to add a trailing sound for single words.



PCI Bios 2.1 Question - How to set device interrupt

I'm hoping someone with PCI programming experience can lend me some advice.

I own a piece of test equipment (Logic Analyzer) which uses an old Pentium-class (circa '97) motherboard running Win98. This motherboard appears to be 'hard-configured' such that configuration through the BIOS is limited. There are only a couple of PCI devices embedded on the motherboard. However, there is an unpopulated PCI slot that I am attempting to get up and running. I managed to solder in a PCI connector and several jumpers so that the slot works fine from a hardware standpoint. I've installed a PCI LAN board which the system recognizes and allocates resources for. The problem is that the BIOS is not assigning an interrupt. One of the limitations of the BIOS configuration is that there is no way to assign interrupts, and the BIOS is not doing that automatically. A BIOS update is not happening.

My idea for a solution was to write a small application that writes to the 82371 South Bridge part that controls the IRQ routing. This appears to work, but the OS/BIOS is reverting it to the original. Specifically, when reading from the 82371 router config registers:

A# - IRQ 11
B# - IRQ-NONE
C# - IRQ-NONE
D# - IRQ 9

I've verified that there is no routing table and that IRQ 11/9 are the only PCI interrupts. I should note that the LAN card is a single-function device and its A# interrupt output is physically routed to the router's B# input (as you would expect). After writing B# - IRQ 10 to the 82371 router registers (IRQ 10 is not used in my system and is available), I read back the registers and they indicate a correct write. After a brief period of time, something changes it back to B# - IRQ-NONE. I am successful in setting the interrupt controller and making the interrupts level-triggered; those changes persist until the next boot.

So, does anyone have any suggestions for forcing the PCI slot's router B# input to be assigned IRQ 10?

I see in the PCI BIOS Rev 2.1 spec a function for setting the PCI hardware interrupt (Sec 4.2.3), and I was going to try this, but I'm confused about how to call it. Below is a copy/paste of the relevant section on using this function. Not being fluent in assembly language may be part of the issue. I understand the concept of segment:offset, but the description below speaks of setting the DS segment so that the physical address resolves to 0x0F0000, with no mention of which register is used for the offset. I am guessing that BX is used and thus the segment is computed based on what is set up in the BX register.

I would appreciate any input on my questions. Thanks, Jim

ENTRY:
  [AH] PCI_FUNCTION_ID
  [AL] SET_PCI_HW_INT
  [CL] IntPin parameter. Valid values 0Ah..0Dh
  [CH] IRQNum parameter. Valid values 0..0Fh
  [BX] BusDev parameter. [BH] holds bus number,
  [BL] holds Device (upper five bits) and Function (lower 3 bits) numbers.
  [DS] Segment or Selector for BIOS data.
       For 16-bit code the real-mode segment or PM selector
       must resolve to physical address 0F0000h and have a limit of 64K.
EXIT:
  [AH] Return Code:
       SUCCESSFUL
       SET_FAILED
       FUNC_NOT_SUPPORTED
  [CF] Completion Status, set = error, cleared = success
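As I read the spec text above, there is no offset register at all: DS merely has to map the BIOS region (a real-mode segment of F000h resolves to physical 0F0000h), and BX carries the bus/device/function rather than an address. Here is my untested real-mode sketch of the call (NASM syntax; the device number 13 is a placeholder for wherever the LAN card actually sits):

        mov  ax, 0B10Fh        ; AH = B1h PCI_FUNCTION_ID, AL = 0Fh SET_PCI_HW_INT
        mov  cl, 0Bh           ; IntPin: 0Ah=INTA# .. 0Dh=INTD#, so 0Bh = INTB#
        mov  ch, 0Ah           ; IRQNum = 10
        mov  bh, 0             ; bus 0
        mov  bl, (13 << 3) | 0 ; device 13 (placeholder), function 0
        mov  dx, 0F000h
        mov  ds, dx            ; real-mode segment F000h -> physical 0F0000h
        int  1Ah               ; PCI BIOS services are reached through INT 1Ah
        jc   set_failed        ; CF set = error; AH holds the return code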


Trying to pull in a folder inside /opt/lampp/htdocs: Permission denied (publickey). fatal: Could not read from remote repository

I'm trying to pull my GitHub project into my folder, which is inside htdocs. No matter what I do, it always throws the same error, and I can't clone or pull from the remote repository.

The error I get is this:

"git@github.com: Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. "

I've tried making a new SSH key with RSA and adding it with the following command: ssh-add ~/.ssh/id_rsa. I also added it to my SSH keys on GitHub. Additionally, I used the command ssh -T git@github.com to check whether I'm using the correct keys.

The result of that last command was: "Hi alfito69420! You've successfully authenticated, but GitHub does not provide shell access."

I also tried cloning via HTTP, and the result was the same: I got the same "Permission denied (publickey)..." error.

However, it keeps throwing the same error and I don't know what else it could be.

Finally, I should add that if I pull from another folder, like one in /Desktop, there's no problem pulling.



2023-04-23

clang-16: error: no such file or directory: 'Welcome'

Compilation of some R packages with R CMD build or install runs into a common clang error, as follows:

clang-16: error: no such file or directory: 'Welcome'
clang-16: error: no such file or directory: 'at'
clang-16: error: no such file or directory: 'Thu'
clang-16: error: no such file or directory: 'Apr'
clang-16: error: no such file or directory: '20'
clang-16: error: no such file or directory: '19:30:11'
clang-16: error: no such file or directory: '2023'

A line, Welcome at Thu Apr 20 19:30:06 2023, appears at the very beginning of the R CMD build or install output, as follows:

R CMD build dada2

Welcome at Thu Apr 20 19:30:06 2023
* checking for file ‘dada2/DESCRIPTION’ ... OK
* preparing ‘dada2’:
* checking DESCRIPTION meta-information ... OK
* cleaning src
* installing the package to build vignettes
      -----------------------------------

Welcome at Thu Apr 20 19:30:07 2023
* installing *source* package ‘dada2’ ...
** using staged installation
** libs

I am using an M1 Mac with the following:

sw_vers
ProductName:        macOS
ProductVersion:     13.3.1
BuildVersion:       22E261

R.version
               _                           
platform       aarch64-apple-darwin20      
arch           aarch64                     
os             darwin20                    
system         aarch64, darwin20           
status                                     
major          4                           
minor          2.2                         
year           2022                        
month          10                          
day            31                          
svn rev        83211                       
language       R                           
version.string R version 4.2.2 (2022-10-31)
nickname       Innocent and Trusting   

I am not sure whether this is an M1 Mac-specific issue, but it happens occasionally with my M1 Mac. I'd really appreciate any pointers to address this issue.
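In case it helps anyone reproduce this: my working theory is that a shell startup file (e.g. ~/.zshrc) prints the "Welcome at ..." banner unconditionally, and the banner leaks into output that R CMD captures when it drives the compiler. If that's the cause, the usual guard is to print banners only in interactive shells, something like:

# sketch: only echo the banner when the shell is interactive
case $- in
  *i*) echo "Welcome at $(date)" ;;
esac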



How to implement own format to replace UnityFS .bundle?

I'm using Addressables to build assets. This process creates a bunch of .bundle files (in UnityFS format).

I would like to create my own asset bundle format (e.g. IPFS's .car).

Is there any class I can implement to override Unity's default archive format for a specific Addressables group?



Range.find is not finding date from Date object

I have created a script in Excel (TypeScript) that works fine locally, but when I run it in Power Automate cloud I receive the following error: Cannot read property 'getOffsetRange' of undefined clientRequestId.

The purpose of the script is to find, in a table with dates, the date corresponding to the first day of the current month, and to write to the cell next to it.

function main(workbook: ExcelScript.Workbook) {
    // Get the active worksheet.
    let sheet = workbook.getWorksheet("Blad1");
    let escp_tab = sheet.getTable("table_name");

    // takes the day of today
    let date = new Date();
    // get the first day of the month
    let firstDay = new Date(date.getFullYear(), date.getMonth(), 1);
    // format the date in DD/MM/YYYY and makes it a string
    const formattedDate = firstDay.toLocaleDateString('en-GB', {
        day: 'numeric', month: 'numeric', year: 'numeric'
    }).replace(/ /g, '/');

    //sets the range of find to the column Year of the table
    let range = escp_tab.getColumnByName("Year").getRange();
    //find in the specified range the first cell with value formattedDate    
    let cellToFind = range.find(formattedDate, { completeMatch: true });
    //define the cellToFill as the one on the same row and one column to the right of cellToFind
    let cellToFill = cellToFind.getOffsetRange(0,1);
    //sets the value of the cellToFill
    cellToFill.setValue("250");
}

I have already tried saving the Excel file again and creating a new one from scratch, but neither of these two solutions seems to fix the bug.
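From what I can tell from the Office Scripts docs, range.find returns undefined when nothing matches (perhaps the cloud runtime formats the date differently), which would explain the getOffsetRange error. A guard like this sketch (my addition, not part of my working script) would at least turn the crash into a readable message:

// sketch: guard against find() returning undefined before using the result
let cellToFind = range.find(formattedDate, { completeMatch: true });
if (!cellToFind) {
    console.log(`"${formattedDate}" was not found in the Year column`);
    return;
}
let cellToFill = cellToFind.getOffsetRange(0, 1);
cellToFill.setValue("250");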



Run lapply with lmer function on list of list

I am struggling to run the lmer function on a data set comprising a list of lists. I have tried converting the lists to data frames too, but it still returns an error:

lapply(list, function(x) lapply(x, function(x) as.data.frame(x)))

lapply(list, function(x) lapply(x, function(x) 
      lmer(x[,4:7] ~ var + (1| id), x))) 

Error in model.frame.default(data = x, drop.unused.levels = TRUE, formula = x[, : not valid type (list) for the variable 'x'

Can anyone suggest something for running lmer through lapply?

reprex

library(tidyverse)
library(readxl)
require(writexl)
write_xlsx(mtcars, 'PQ_ele_com.xlsx')
        
dir = c('com', 'set', 'rit')
init = c('PQ', 'MG')
inpst = c('_ele_', '_mel_', '_col_')
           
for (i in dir) {
   for (j in init) {
      for (k in inpst){
        write_xlsx(mtcars, paste0(j, k, i, '.xlsx')) 
      }
   }
}
    
files = list.files(recursive= FALSE, full.names= FALSE)
     
init = c('PQ', 'MG')
dir = c('com', 'set', 'rit')
     
list3 = NULL
for (j in init){
   for (k in dir){
      df=c()
      for (i in files){
           if(str_detect(i, pattern = j) & str_detect(i, pattern = k)) {
             df=rbind(df,read_excel(i))}
         }
         list3[[j]][[k]] = df
      }
 }

Let's suppose I would like to fit an lmer model to each sublist. For example:

lapply(list3, function(x) lapply(x, 
    function(x) lmer(x[,3:7]  ~ vs + (1| cyl), x))) 

It returns the error reported above. Do you think it is possible to coerce it somehow?
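For what it's worth, the variant I was going to try next (a sketch, based on my assumption that lmer only accepts a single response on the left-hand side) loops over the response columns with reformulate():

# sketch: fit one model per response column instead of a matrix LHS
library(lme4)

fits <- lapply(list3, function(sub) lapply(sub, function(d) {
  d <- as.data.frame(d)
  lapply(names(d)[3:7], function(resp) {
    lmer(reformulate(c("vs", "(1 | cyl)"), response = resp), data = d)
  })
}))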



Auth0 Python quickstart - validation of signature in ID token

I'm building a Python Flask app with Auth0 as my OpenID Connect provider. The Auth0 website provides some skeleton code, which I'm using as the starting point for my app. The skeleton code works and is easily extensible; however, there are some ambiguities as to what the code is doing and why the behaviour adheres to modern security standards. I can't afford to be less than 100% confident when it comes to security, so I would like to run these ambiguities past the experts here at StackOverflow.

My understanding of the skeleton code

The following is the sequence of events that occur when a user interacts with this skeleton app. (See below for the code.)

  1. The user opens http://localhost:3000/ in the browser. This invokes the home endpoint in the Flask app. At this point in time, the user does not have a session cookie, so the response is some HTML containing the words "Hello guest" and a login button.
  2. The user clicks on the login button, which is a link to http://localhost:3000/login. This invokes the login endpoint in the Flask app, which redirects the user to Auth0's login box.
  3. The user enters their email and username into Auth0's login box. The user is redirected to http://localhost:3000/callback; the authorisation code generated by the successful login is passed in the query string in this URL.
  4. The callback endpoint in the Flask app is invoked. The authorisation code is sent to Auth0 in exchange for an ID token and an access token. A session cookie is set, containing this ID token and access token. The user is redirected to http://localhost:3000/.
  5. The home endpoint is invoked again. This time, the user has a session cookie. The endpoint returns some HTML containing the text Welcome {username}, plus further info about the user contained inside the ID token in the session cookie.

And how exactly is the session cookie constructed? My understanding is that the session cookie contains the ID token and access token, plus a signature. The signature is created using the Flask app's secret key (see line 8 in the code sample below).


Question 1: JWT signature verification

The ID token is a JWT. Being a JWT, the ID token contains a signature. This signature is signed using Auth0's private key, and can be verified using Auth0's public key (also known as the JWK).

Since Auth0 goes to the trouble of signing the ID token, I would expect that any endpoint in our Flask app that uses the ID token as a proof of the user's identity ought to verify the signature on the ID token using Auth0's public key. Otherwise, what's the point in Auth0 signing the ID token?

But the home endpoint uses the ID token as proof of the user's identity, and it does not verify the signature on the ID token! (At least, that's the impression I get by reading the code and ctrl-clicking through the library methods. That said, I'm not too confident in this assertion since the authlib.integrations.flask_client library is not very friendly for ctrl-clicking.)

My questions are:

  • Am I correct that the signature inside the ID token does not get verified by the home endpoint?
  • Assuming I'm correct, then is this a problem? Should I fix it?

Warning: There are two signatures in this setup:

  • The signature inside the ID token, which is signed by Auth0's private key and can be verified using Auth0's public key.
  • The signature on the session cookie, which is signed using the Flask app's secret key.

It is impossible for an attacker to forge a session cookie, because the attacker doesn't have the Flask app's secret key, and so is unable to create a valid signature for the session cookie. So to my naive mind, the app seems secure. Sure, the app fails to verify the signature in the ID token, but it makes up for this by verifying the signature on the session cookie.

Still, I suspect that the signature in the ID token has got to be there for a good reason, presumably to defend against some kind of attack that my non-expert brain hasn't thought of. It's likely that I'm missing something.

Question 2: Expiry time verification

The ID token contains an expiry time. As far as I can tell, the home endpoint doesn't check that the ID token hasn't expired.

Again, I'm not 100% sure if I'm right about this. Maybe the session cookie has a concept of an expiry time, which gets checked by Flask? I can't tell because I don't know how to decode the session cookie so that I can inspect it.

My questions are:

  • Is it really the case that the home endpoint doesn't check for expiry?
  • If so, then is this a mistake?

Finally, feel free to correct any misconceptions I have and/or suggest better choice of terminology. I'm not an expert and I'm here to learn.


Code samples:

server.py

import json
from os import environ as env

from authlib.integrations.flask_client import OAuth
from flask import Flask, redirect, render_template, session, url_for
    
app = Flask(__name__)
app.secret_key = env.get("APP_SECRET_KEY")

oauth = OAuth(app)

oauth.register(
    "auth0",
    client_id=env.get("AUTH0_CLIENT_ID"),
    client_secret=env.get("AUTH0_CLIENT_SECRET"),
    client_kwargs={"scope": "openid profile email"},
    server_metadata_url=f'https://{env.get("AUTH0_DOMAIN")}/.well-known/openid-configuration'
)

@app.route("/login")
def login():
    return oauth.auth0.authorize_redirect(
        redirect_uri=url_for("callback", _external=True)
    )

@app.route("/callback", methods=["GET", "POST"])
def callback():
    token = oauth.auth0.authorize_access_token()
    session["user"] = token
    return redirect("/")

@app.route("/")
def home():
    return render_template(
      "home.html",
      session=session.get('user'),
      pretty=json.dumps(session.get('user'), indent=4)
    )

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=env.get("PORT", 3000))

templates/home.html

<html>
<head>
  <meta charset="utf-8" />
  <title>Auth0 Example</title>
</head>
<body>
  
    <h1>Welcome Guest</h1>
    <p><a href="/login">Login</a></p>
  
</body>
</html>
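For anyone who wants to experiment alongside me: here's a sketch of how I'd verify the ID token's signature and expiry by hand with authlib's jose module (I'm assuming the token dict stored in the session exposes an id_token field; I haven't confirmed that):

# sketch: manually verify the ID token against Auth0's published JWKs
import requests
from authlib.jose import jwt

jwks = requests.get(
    f'https://{env.get("AUTH0_DOMAIN")}/.well-known/jwks.json'
).json()

id_token = session["user"]["id_token"]  # assumption: the token dict has this field
claims = jwt.decode(id_token, jwks)     # raises if the signature doesn't verify
claims.validate()                       # raises if "exp" is in the past
print(claims["sub"], claims["exp"])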


2023-04-22

SpringBoot: JWT Error (Failed to authenticate since the JWT was invalid)

I've been following this tutorial to create a YouTube Clone using Angular+SpringBoot: https://www.youtube.com/watch?v=DW1nQ4o3sCI&t=15427s

What I'm trying to do right now is create a user registration endpoint using OAuth, so I can write the user information to my database. However, I'm having some issues with JWT.

Whenever I try to access the endpoint (localhost:8080/api/user/register), I receive the following error on my console:

[nio-8080-exec-3] o.s.s.o.s.r.a.JwtAuthenticationProvider : Failed to authenticate since the JWT was invalid

Setting some breakpoints and debugging the application, the JWT is okay in SecurityConfig.java (it returns a valid object). However, when I call the endpoint using Postman and send my token as a Bearer token, I receive the error above. I can't even debug the UserController; even with breakpoints set, I just receive the error when I call the endpoint. The only way I can step through the controller is by calling it from my browser, but then the authentication is null, so it won't work.

UserController.java

package com.pablo.portfolio.youtubeclone.controller;

@RestController
@RequestMapping("/api/user")
@RequiredArgsConstructor
public class UserController {


    private final UserRegistrationService userRegistrationService;
    private final UserService userService;

    @GetMapping("/register")
    @ResponseStatus(HttpStatus.OK)
    public String register(Authentication authentication){

        Jwt jwt = (Jwt)authentication.getPrincipal();
        userRegistrationService.registerUser(jwt.getTokenValue());
        return "User registered successfully";
    }
}

SecurityConfig.java

package com.pablo.portfolio.youtubeclone.config;

@Configuration
@EnableWebSecurity
public class SecurityConfig {
    @Value("${spring.security.oauth2.resourceserver.jwt.issuer.uri}")
    private String issuer;
    @Value("${auth0.audience}")
    private String audience;
    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {

        http
                .csrf()
                .disable().authorizeRequests()
                .requestMatchers("/").permitAll()
                .requestMatchers("/api/videos/","/save-video-details/","/upload-video/","/thumbnail/").authenticated()
                .requestMatchers("/api/private-scoped").hasAuthority("SCOPE_read:messages")
                .and().cors()
                .and().oauth2ResourceServer().jwt();
        return http.build();



    }

    @Bean
    CorsConfigurationSource corsConfigurationSource() {
        CorsConfiguration configuration = new CorsConfiguration();
        configuration.setAllowedOrigins(Collections.singletonList("http://localhost:4200"));
        configuration.setAllowedMethods(Arrays.asList("GET","POST", "PUT", "DELETE", "PATCH", "OPTIONS"));
        configuration.setExposedHeaders(Arrays.asList("Authorization", "content-type"));
        configuration.setAllowedHeaders(Arrays.asList("Authorization", "content-type"));
        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/**", configuration);
        return source;
    }


    @Bean
    JwtDecoder jwtDecoder(){
        NimbusJwtDecoder jwtDecoder = (NimbusJwtDecoder)
                JwtDecoders.fromOidcIssuerLocation(issuer);

        OAuth2TokenValidator<Jwt> audienceValidator = new AudienceValidator(audience);
        OAuth2TokenValidator<Jwt> withIssuer = JwtValidators.createDefaultWithIssuer(issuer);
        OAuth2TokenValidator<Jwt> withAudience = new DelegatingOAuth2TokenValidator<>(withIssuer, audienceValidator);

        jwtDecoder.setJwtValidator(withAudience);

        return jwtDecoder;
    }
}

Also, I checked the token on jwt.io and it's valid. Here's the response:

Header:
{
  "alg": "RS256",
  "typ": "JWT",
  "kid": "7_mQttmzor-VMdGDXMEKN"
}

Payload:
{
  "iss": "https://dev-i63rcznjs255jxgf.us.auth0.com/",
  "sub": "google-oauth2|113855475209558231767",
  "aud": [
    "http://localhost:8080",
    "https://dev-i63rcznjs255jxgf.us.auth0.com/userinfo"
  ],
  "iat": 1681740331,
  "exp": 1681826731,
  "azp": "NJbox1ptzDdadd1Zo9mlBvtkElgge59v",
  "scope": "openid profile email"
}
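One thing I plan to try next is calling the decoder directly on the raw token, since JwtValidationException carries the individual failed checks (the sketch below is mine; class and method names are placeholders). I also notice that the exp claim above (1681826731, which if I'm converting the epoch correctly is 2023-04-18 UTC) is earlier than the 2023-04-19 timestamps in the log below, so expiry is my first suspect:

// sketch: surface the concrete validation errors instead of the generic message
import org.springframework.security.oauth2.jwt.JwtDecoder;
import org.springframework.security.oauth2.jwt.JwtValidationException;

public class DecodeProbe {
    public static void probe(JwtDecoder decoder, String tokenValue) {
        try {
            decoder.decode(tokenValue);
            System.out.println("token decodes cleanly");
        } catch (JwtValidationException e) {
            // each OAuth2Error names the failed check (expiry, issuer, audience, ...)
            e.getErrors().forEach(err -> System.out.println(err.getDescription()));
        }
    }
}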

EDIT: Full log

2023-04-19T22:07:32.963-03:00 TRACE 1909589 --- [           main] eGlobalAuthenticationAutowiredConfigurer : Eagerly initializing {securityConfig=com.pablo.portfolio.youtubeclone.config.SecurityConfig$$SpringCGLIB$$0@61d2f267}
2023-04-19T22:07:32.964-03:00 DEBUG 1909589 --- [           main] swordEncoderAuthenticationManagerBuilder : No authenticationProviders and no parentAuthenticationManager defined. Returning null.
2023-04-19T22:07:34.768-03:00 DEBUG 1909589 --- [           main] edFilterInvocationSecurityMetadataSource : Adding web access control expression [permitAll] for Mvc [pattern='/']
2023-04-19T22:07:34.780-03:00 DEBUG 1909589 --- [           main] edFilterInvocationSecurityMetadataSource : Adding web access control expression [authenticated] for Mvc [pattern='/api/videos/']
2023-04-19T22:07:34.780-03:00 DEBUG 1909589 --- [           main] edFilterInvocationSecurityMetadataSource : Adding web access control expression [authenticated] for Mvc [pattern='/save-video-details/']
2023-04-19T22:07:34.780-03:00 DEBUG 1909589 --- [           main] edFilterInvocationSecurityMetadataSource : Adding web access control expression [authenticated] for Mvc [pattern='/upload-video/']
2023-04-19T22:07:34.780-03:00 DEBUG 1909589 --- [           main] edFilterInvocationSecurityMetadataSource : Adding web access control expression [authenticated] for Mvc [pattern='/thumbnail/']
2023-04-19T22:07:34.780-03:00 DEBUG 1909589 --- [           main] edFilterInvocationSecurityMetadataSource : Adding web access control expression [hasAuthority('SCOPE_read:messages')] for Mvc [pattern='/api/private-scoped']
2023-04-19T22:07:34.788-03:00 TRACE 1909589 --- [           main] o.s.s.w.a.i.FilterSecurityInterceptor    : Validated configuration attributes
2023-04-19T22:07:34.789-03:00 TRACE 1909589 --- [           main] o.s.s.w.a.i.FilterSecurityInterceptor    : Validated configuration attributes
2023-04-19T22:07:34.794-03:00  INFO 1909589 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@ecf028c, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@7b64bbad, org.springframework.security.web.context.SecurityContextHolderFilter@68f2363, org.springframework.security.web.header.HeaderWriterFilter@d978ab9, org.springframework.web.filter.CorsFilter@71eff6a3, org.springframework.security.web.authentication.logout.LogoutFilter@484302ee, org.springframework.security.oauth2.server.resource.web.authentication.BearerTokenAuthenticationFilter@4cd8db31, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@86377d5, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@87220f1, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@574218f, org.springframework.security.web.access.ExceptionTranslationFilter@d180961, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@35d114f4]
2023-04-19T22:07:34.963-03:00  INFO 1909589 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2023-04-19T22:07:34.972-03:00  INFO 1909589 --- [           main] c.p.p.y.YoutubeCloneApplication          : Started YoutubeCloneApplication in 5.035 seconds (process running for 5.444)
2023-04-19T22:07:56.036-03:00  INFO 1909589 --- [nio-8080-exec-3] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
2023-04-19T22:07:56.036-03:00  INFO 1909589 --- [nio-8080-exec-3] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2023-04-19T22:07:56.038-03:00  INFO 1909589 --- [nio-8080-exec-3] o.s.web.servlet.DispatcherServlet        : Completed initialization in 1 ms
2023-04-19T22:07:56.043-03:00 TRACE 1909589 --- [nio-8080-exec-3] o.s.security.web.FilterChainProxy        : Trying to match request against DefaultSecurityFilterChain [RequestMatcher=any request, Filters=[org.springframework.security.web.session.DisableEncodeUrlFilter@ecf028c, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@7b64bbad, org.springframework.security.web.context.SecurityContextHolderFilter@68f2363, org.springframework.security.web.header.HeaderWriterFilter@d978ab9, org.springframework.web.filter.CorsFilter@71eff6a3, org.springframework.security.web.authentication.logout.LogoutFilter@484302ee, org.springframework.security.oauth2.server.resource.web.authentication.BearerTokenAuthenticationFilter@4cd8db31, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@86377d5, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@87220f1, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@574218f, org.springframework.security.web.access.ExceptionTranslationFilter@d180961, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@35d114f4]] (1/1)
2023-04-19T22:07:56.044-03:00 DEBUG 1909589 --- [nio-8080-exec-3] o.s.security.web.FilterChainProxy        : Securing GET /api/user/register
2023-04-19T22:07:56.045-03:00 TRACE 1909589 --- [nio-8080-exec-3] o.s.security.web.FilterChainProxy        : Invoking DisableEncodeUrlFilter (1/12)
2023-04-19T22:07:56.045-03:00 TRACE 1909589 --- [nio-8080-exec-3] o.s.security.web.FilterChainProxy        : Invoking WebAsyncManagerIntegrationFilter (2/12)
2023-04-19T22:07:56.047-03:00 TRACE 1909589 --- [nio-8080-exec-3] o.s.security.web.FilterChainProxy        : Invoking SecurityContextHolderFilter (3/12)
2023-04-19T22:07:56.048-03:00 TRACE 1909589 --- [nio-8080-exec-3] o.s.security.web.FilterChainProxy        : Invoking HeaderWriterFilter (4/12)
2023-04-19T22:07:56.050-03:00 TRACE 1909589 --- [nio-8080-exec-3] o.s.security.web.FilterChainProxy        : Invoking CorsFilter (5/12)
2023-04-19T22:07:56.053-03:00 TRACE 1909589 --- [nio-8080-exec-3] o.s.security.web.FilterChainProxy        : Invoking LogoutFilter (6/12)
2023-04-19T22:07:56.053-03:00 TRACE 1909589 --- [nio-8080-exec-3] o.s.s.w.a.logout.LogoutFilter            : Did not match request to Or [Ant [pattern='/logout', GET], Ant [pattern='/logout', POST], Ant [pattern='/logout', PUT], Ant [pattern='/logout', DELETE]]
2023-04-19T22:07:56.053-03:00 TRACE 1909589 --- [nio-8080-exec-3] o.s.security.web.FilterChainProxy        : Invoking BearerTokenAuthenticationFilter (7/12)
2023-04-19T22:07:56.055-03:00 TRACE 1909589 --- [nio-8080-exec-3] o.s.s.authentication.ProviderManager     : Authenticating request with JwtAuthenticationProvider (1/2)
2023-04-19T22:07:56.068-03:00 DEBUG 1909589 --- [nio-8080-exec-3] o.s.s.o.s.r.a.JwtAuthenticationProvider  : Failed to authenticate since the JWT was invalid
2023-04-19T22:07:56.070-03:00 TRACE 1909589 --- [nio-8080-exec-3] .s.r.w.a.BearerTokenAuthenticationFilter : Failed to process authentication request

org.springframework.security.oauth2.server.resource.InvalidBearerTokenException: Unable to validate Jwt
    at org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationProvider.getJwt(JwtAuthenticationProvider.java:100) ~[spring-security-oauth2-resource-server-6.0.2.jar:6.0.2]
    at org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationProvider.authenticate(JwtAuthenticationProvider.java:87) ~[spring-security-oauth2-resource-server-6.0.2.jar:6.0.2]
    at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:182) ~[spring-security-core-6.0.2.jar:6.0.2]
    at org.springframework.security.oauth2.server.resource.web.authentication.BearerTokenAuthenticationFilter.doFilterInternal(BearerTokenAuthenticationFilter.java:137) ~[spring-security-oauth2-resource-server-6.0.2.jar:6.0.2]
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.7.jar:6.0.7]
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:91) ~[spring-web-6.0.7.jar:6.0.7]
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.7.jar:6.0.7]
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.7.jar:6.0.7]
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.7.jar:6.0.7]
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.7.jar:6.0.7]
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) ~[spring-security-web-6.0.2.jar:6.0.2]
    at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:352) ~[spring-web-6.0.7.jar:6.0.7]
    at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:268) ~[spring-web-6.0.7.jar:6.0.7]
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-6.0.7.jar:6.0.7]
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.7.jar:6.0.7]
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-6.0.7.jar:6.0.7]
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.7.jar:6.0.7]
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-6.0.7.jar:6.0.7]
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.7.jar:6.0.7]
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:166) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:341) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:390) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:894) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1741) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-10.1.7.jar:10.1.7]
    at java.base/java.lang.Thread.run(Thread.java:1623) ~[na:na]
Caused by: org.springframework.security.oauth2.jwt.JwtValidationException: Unable to validate Jwt
    at org.springframework.security.oauth2.jwt.NimbusJwtDecoder.validateJwt(NimbusJwtDecoder.java:189) ~[spring-security-oauth2-jose-6.0.2.jar:6.0.2]
    at org.springframework.security.oauth2.jwt.NimbusJwtDecoder.decode(NimbusJwtDecoder.java:138) ~[spring-security-oauth2-jose-6.0.2.jar:6.0.2]
    at org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationProvider.getJwt(JwtAuthenticationProvider.java:96) ~[spring-security-oauth2-resource-server-6.0.2.jar:6.0.2]
    ... 58 common frames omitted

2023-04-19T22:07:56.075-03:00 TRACE 1909589 --- [nio-8080-exec-3] o.s.s.w.header.writers.HstsHeaderWriter  : Not injecting HSTS header since it did not match request to [Is Secure]