2023-11-30

Pagination from two tables in SQL Server

I have two tables with the following schema:

  • Table A: ColumnA, UserId, ... - rest of the schema omitted for brevity
  • Table B: ColumnB, UserId, ... - rest of the schema omitted for brevity

The tables can have duplicate values between them. For example, Table A row (<some-columnA-value>, 1, ...) and Table B row (<some-columnB-value>, 1, ...), 1 being the UserId.

Now, I have an API which is used to fetch all the UserId values from both tables. With increasing data, I now want to use pagination for this API and would like to modify the queries accordingly. There should also not be any duplicates across pages or within a page.

How do I achieve this? Another requirement is that I use keyset pagination rather than offset pagination, since offset pagination gets slower as the offset increases.

So far, I have thought of using an indexed view, since there is only one column that I need to fetch, but because the data changes frequently and in large volumes, the overhead of maintaining the indexed view is not acceptable.

Table A:

ColumnA   UserId
x         1
y         2
z         3
w         4

Table B:

ColumnB   UserId
a         1
b         3
c         5
d         6

Result (if no page size):

UserId
1
2
3
4
5
6

Result (if page size 3):

Page 1

UserId
1
2
3

Page 2

UserId
4
5
6


2023-11-29

Null checking with primary constructor in C# 12

I am using C# 12. In C# 12 I can use a primary constructor:

public class UserService(IUnitOfWork uow) : IUserService
{

}

Before C# 12, I used null checking for the items that I inject in the constructor:

public class UserService : IUserService
{
    private readonly IUnitOfWork _uow;

    public UserService(IUnitOfWork uow)
    {
        ArgumentNullException.ThrowIfNull(uow);
        _uow = uow;
    }
}

Now how can I do null checking in C# 12?
Do I need to use fail-fast with a primary constructor?



Django with mypy: How to resolve incompatible types error due to redefined field for custom `User` model class that extends "AbstractUser"?

I have an existing Django project which uses a custom User model class that extends AbstractUser. For various important reasons, we need to redefine the email field as follows:

class User(AbstractUser):
    ...
    email = models.EmailField(db_index=True, blank=True, null=True, unique=True)
    ...

Typing checks via mypy have been recently added. However, when I perform the mypy check, I get the following error:

error: Incompatible types in assignment (expression has type "EmailField[str | int | Combinable | None, str | None]", base class "AbstractUser" defined the type as "EmailField[str | int | Combinable, str]") [assignment]

How can I make it so that mypy allows this type reassignment? I don't wish to just use # type: ignore because I want to keep its type protections.

For context, if I do use # type: ignore, then I get dozens of instances of the following mypy error instead from all over my codebase:

error: Cannot determine type of "email" [has-type]

Here are details of my setup:

python version: 3.10.5
django version: 3.2.19
mypy version: 1.6.1
django-stubs[compatible-mypy] version: 4.2.6
django-stubs-ext version: 4.2.5
typing-extensions version: 4.8.0


2023-11-28

Avalonia window (textboxes) embedded in Autodesk Inventor not accepting input

I'm developing an Autodesk Inventor plug-in and I've chosen to use Avalonia for the UI.

Inventor exposes the ability to create a dockable window. I'm not completely sure how it works behind the scenes, but you can add a WinForms/WPF control to it by adding the control's handle as a child of the dockable window.

After looking at some samples I figured out how to add the avalonia control to the dockable window.

Everything seems to be working fine, except that keypresses are not accepted (just backspace & delete work). When I run the app from a button press in the ribbon, there are no such problems.

I've found some information on StackOverflow and on the Autodesk forum. I thought the problem might be related to Avalonia so I've used the sample here to embed the avalonia app in a WPF window, thinking this would fix the problem.

It didn't. This thread on the autodesk forum describes the same problem, but for a WPF window.

<Grid>
    <!--WPF input works-->
    <TextBox Text="Text"></TextBox>

    <!--Avalonia input does not work-->
    <interop:WpfAvaloniaHost  x:Name="AvaloniaHost" />
</Grid>

The fix in the autodesk thread:

public void Activate(ApplicationAddInSite addInSiteObject, bool firstTime)
{
    // Setup my WPF Window.
    var wpfWindow = new WpfWindow();
    wpfWindow.WindowStyle = System.Windows.WindowStyle.None;
    wpfWindow.ResizeMode = System.Windows.ResizeMode.NoResize;
    wpfWindow.Visibility = System.Windows.Visibility.Visible;

    // Get WPF Window's handle.
    var helper = new WindowInteropHelper(wpfWindow);
    helper.EnsureHandle();
    var handle = helper.Handle;

    // Create Dockable Window.
    var dockableWindow = InventorApplication.UserInterfaceManager.DockableWindows.Add(System.Guid.NewGuid().ToString(), "Test", "Test");
    dockableWindow.AddChild(handle);

    // Set key hook.
    HwndSource.FromHwnd(handle).AddHook(WndProc);
}

private const UInt32 DLGC_WANTARROWS = 0x0001;
private const UInt32 DLGC_WANTTAB = 0x0002;
private const UInt32 DLGC_WANTALLKEYS = 0x0004;
private const UInt32 DLGC_HASSETSEL = 0x0008;
private const UInt32 DLGC_WANTCHARS = 0x0080;
private const UInt32 WM_GETDLGCODE = 0x0087;

private static IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
{
    if (msg == WM_GETDLGCODE)
    {
        handled = true;
        return new IntPtr(DLGC_WANTCHARS | DLGC_WANTARROWS | DLGC_HASSETSEL | DLGC_WANTTAB | DLGC_WANTALLKEYS);
    }
    return IntPtr.Zero;
}

This fixes the problem for input in the WPF textboxes, but not for the embedded Avalonia window.

This made me conclude that the problem lies elsewhere.

Somehow I need to pass the keypresses to the avalonia controls, but I have no clue how. Does anyone have any experience with this problem? Any advice is greatly appreciated!



2023-11-27

Use a Nonlinear Poisson Regression with two independent variables?

I'm looking for a way to use Nonlinear Regression with Poisson for predictive purposes. I would need something similar to Poisson regression, but with some modifications because:

  • I have two data sets that are made up of randomly placed numbers and have no correlation;
  • The two variables are both independent (and not one dependent and one independent);
  • The purpose of the regression will be to obtain a parameter that can be used in the Poisson distribution to calculate probabilities (explained better later), such as in poisson.pmf(2, regression_result);

Is there something I could use that satisfies the three points above? Any algorithm in some library like scikit-learn, SciPy, etc.? I can't find an algorithm that is useful for my case. I would need something similar to sklearn.linear_model.PoissonRegressor, but for nonlinear regression and with both variables independent.

My data are:

Team_A__goal_scored = [1, 3, 1, 2, 2] (x)
Team_B__goal_conceded = [3, 0, 1, 1, 2] (y)

WHAT DO I WANT TO GET? I need to find the probability of Team A scoring exactly 2 goals against Team B, using the Poisson distribution, for example poisson.pmf(2, regression_result). As the lambda in the Poisson distribution, I will use the regression result. I want to use regression for the purpose of relating Team A's offense to Team B's defense, to find an ideal parameter to use in the Poisson distribution.

EXPLANATION: Team_A__goal_scored and Team_B__goal_conceded are the data of the last 5 rounds/matches played by the two teams against other opponents (Team A and Team B have never clashed). In the sixth round, Team A and Team B will face each other, and therefore I want to relate their data and calculate the probability that Team A scores 0 goals, 1 goal, 2 goals, 3 goals, etc. To be precise, I only need exactly 2 goals.

CLEAR SOMEONE'S DOUBT: Someone may say: "Why don't you directly use the Poisson distribution on the arithmetic mean of Team_A__goal_scored?" By doing this, I will have the probability that Team A scores a certain number of goals, and it is a fair solution, but it is not what I want, because in this way the goals that Team A scores are calculated only FROM ITS OWN HISTORY... and not related to/influenced by the goals conceded by Team B in previous matches. I want to know how many goals Team A will score against Team B, considering the goals scored by Team A and also the goals conceded by Team B, because Team A's attack will be influenced by Team B's defense.
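For clarity, this is a minimal sketch of that simple approach I am NOT looking for (it uses only Team A's average as lambda and ignores Team B entirely):

import numpy as np
from scipy.stats import poisson

Team_A__goal_scored = [1, 3, 1, 2, 2]

# Naive version: lambda comes only from Team A's own scoring history.
lambda_a = np.mean(Team_A__goal_scored)          # 1.8
probability_two_goals = poisson.pmf(2, lambda_a)
print(probability_two_goals)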

IMPORTANT EXAMPLE TO UNDERSTAND BETTER: I'll explain with an example. If I have Team_A__goal_scored = [1, 3, 1, 2, 2] and Team_B__goal_conceded = [4, 5, 6, 5, 4], it means Team B concedes a lot of goals. If instead Team_B__goal_conceded is [3, 0, 1, 1, 2], it means that Team B concedes fewer goals and it will be more difficult for Team A to score against Team B. The goals that Team A will score will also be INFLUENCED by the goals that Team B concedes.

I would like this final output:

The probability that Team A scores exactly 2 goals against Team B is: x %

UPDATE

I tried Poisson linear regression with one dependent variable and one independent variable. It's the closest to what I'm looking for, but obviously it's not good enough for the reasons stated above (lack of non-linearity and lack of two independent variables). The problem is that I can't find an algorithm that fits my case.

import numpy as np
from sklearn.linear_model import PoissonRegressor

Team_A__goal_scored = np.array([1, 3, 1, 2, 2]).reshape(-1, 1)
Team_B__goal_conceded = np.array([3, 0, 1, 1, 2])

#Fit the model
clf = PoissonRegressor()
clf.fit(Team_A__goal_scored, Team_B__goal_conceded)

#Find the prediction (e.g. Team_A__goal_scored = 2)
lambda_pred = clf.predict(np.array([[2]]))[0]

#Poisson probability mass function to find the probability of scoring exactly 2 goals
from scipy.stats import poisson
probability_two_goals = poisson.pmf(2, lambda_pred)

print("Probability that team A scores exactly 2 goals against team B: ", probability_two_goals) 

Thank you



2023-11-26

Excel: Compounding matrix generation with array inputs

Task: To output an array of scalar products based on input array of boolean values and scalars.

Requirements: The solution needs to be formulaic (i.e. contained in one cell) and done without the use of VBA -> the solution needs to be dynamic to accommodate different input arrays.

Input array A (Boolean values)

>=      2023  2024  2025  2026
2023       1     0     0     0
2024       1     1     0     0
2025       1     1     1     0
2026       1     1     1     1

Input array B (Scalar values)

2023  2024  2025  2026
1.25  1     1.2   1.05
1.35  1.1   1     1.2
1.25  1.15  1.05  1.05
1.3   1     1.1   1.15
1.25  1.1   1.4   1.35

Output array (Compounded scalars)

2023  2024    2025      2026
1.25  1.25    1.5       1.575
1.35  1.485   1.485     1.782
1.25  1.4375  1.509375  1.58484375
1.3   1.3     1.43      1.6445
1.25  1.375   1.925     2.59875

In practice: the columns of the Output array are composed of row-wise products of Input array B. For example, the first column is only the 2023 scalars, but the third is 2023 * 2024 * 2025 on each row (i.e. 1.25 * 1 * 1.2 for the first value of the third column). As such, the Output array has the same amount of columns as Input array B, with each column's values being the product of the current and the preceding columns of Input array B.

Note: The format of Input array A is irrelevant in the sense that the boolean values just need to indicate which elements of a particular row are multiplied together -> the array can be changed if needed, but Input array B should remain as provided.

Bruteforce solution: This task can be completed with MAKEARRAY() but it becomes exceedingly inefficient when Input array B has thousands of rows.

Solution found:

(screenshot of the working formula not included)



2023-11-25

Local host redirecting too many times

I keep getting this whenever I try to run localhost:

This page isn’t working localhost redirected you too many times. Try deleting your cookies. ERR_TOO_MANY_REDIRECTS

I tried deleting third-party cookies. What can I do to solve this issue? I downloaded the snipe-it database but it does not seem to be working.



2023-11-24

Regex, anonymise all matches, per line where there is 1 mandatory match with various optional matches

I've researched this quite heavily now but nothing seems to be getting me close.

Below is an excerpt of a csv file. I need to anonymise certain lines where there is a match found for an email address. Once a match is found, I need to also anonymise certain other fields that might be present on the same line.

I read about ? making the preceding token optional, so I thought it would be relatively easy to specify an optional group and a mandatory group, but I can't get it to work.

This is the example data:

test1,rod.p@nono.com,bbb,123456789,987654321,aaa,121
test2,aaa,rod.p@yes.com,123456789,aaa,bbb,987654321,122,rod.p@yes.com,aaa,123456
test3,rod.p@yesyes.com,123456789,987654321,aaa,123

From the data below, I need only the line test2 to be matched, and specifically these parts:

aaa [optional as long as the email address has been matched on the same line]
bbb [optional as long as the email address has been matched on the same line]
rod.p@yes.com [mandatory]

(please note the email address may appear more than once)

The below syntax will highlight the right parts but will also select the aaa and bbb on the other rows that don't have the correct email address.

(aaa|bbb)?(rod\.p@yes\.com)?

So I realised that I need to define a start and end with ^ and $, but this is where I'm getting stuck; nothing I do makes it work.

^(aaa|bbb)?.*(rod\.p@yes\.com).*$

This matches the whole line of test2 (I guess this is because of the '.*'), but I need to match only the individual parts so that I can replace them with the word anonymised. I've tried various things but haven't managed to get it working yet. Any guidance would be much appreciated. Thanks.

PS: I'm testing this using regexr.com with the multiline and global flags enabled.
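To clarify the goal (not the regexr syntax I'm asking about), here is a rough Python sketch of the per-line logic I'm after: check for the mandatory email first, and only then anonymise the email and the optional fields on that line.

import re

lines = [
    "test1,rod.p@nono.com,bbb,123456789,987654321,aaa,121",
    "test2,aaa,rod.p@yes.com,123456789,aaa,bbb,987654321,122,rod.p@yes.com,aaa,123456",
    "test3,rod.p@yesyes.com,123456789,987654321,aaa,123",
]

email = re.compile(r"rod\.p@yes\.com")
optional = re.compile(r"\b(aaa|bbb)\b")

for line in lines:
    if email.search(line):                       # mandatory match on this line
        line = email.sub("anonymised", line)     # replace every occurrence of the email
        line = optional.sub("anonymised", line)  # ...and the optional aaa/bbb fields
    print(line)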



2023-11-23

OSDev -- double buffering rebooting system

Hello, I'm trying to make a simple OS, and I'm currently trying to implement double buffering.
I have two arrays the size of the screen and a function that copies the array currently in use to the screen memory and then swaps the in-use array to the second one.
There is also a function that clears the screen by setting it to a specific color.

screen.c

static u8 *BUFFER = (u8*) 0xA0000;

u8 buffers[2][SCREEN_SIZE];
u8 current_buffer = 0;

#define CURRENT     (buffers[current_buffer])
#define SWAP()      (current_buffer = 1 - current_buffer)

void screen_swap(){
    memcpy(BUFFER, CURRENT, SCREEN_SIZE);
    SWAP();
}

void clear_screen(u8 color){                  // set all the memory of the screen to one color
    memset(&CURRENT, color, SCREEN_SIZE);    
}

memory.c

void memset(void* src, u8 val, u32 len){
    u8* ptr = (u8*)src;
    while(len--){
        *ptr++ = val;
    }
}

void* memcpy(void* dst, void* src, u32 len){
    u8 *d = (u8*)dst;
    const u8 *s = (const u8*)src;
    
    while (len-- > 0){
        *d++ = *s++;
    }
    return dst;
}

When I try to run these functions the system keeps rebooting. For example:

    clear_screen(COLOR(0,0,255));
    screen_swap();

A link to my github repo for more context



2023-11-22

how to pass an assertion in if condition using cypress without halting the execution in case of assertion failure

I am trying to pass an assertion to an if condition and execute one piece of logic when the condition is met and another when it fails.

Since the test fails when the assertion fails, I am not able to achieve the desired result.

I tried the following...

if(cy.get("div").length\>0)

{

cy.log("print this")

}

else

{

cy.log("print this")

}

or

if(cy.get("div").should('have.length.greaterThan',0)

{

cy.log("print this")

}

else

{

cy.log("print this")

}


Find the minimum value for each unique key without using a for loop

I have a numpy array with keys (e.g. [1, 2, 2, 3, 3, 2]) and an array with values (e.g. [0.2, 0.6, 0.8, 0.4, 0.9, 0.3]). I want to find the minimum value associated with each unique key without using a for loop. In this example, the answer is {1: 0.2, 2: 0.3, 3: 0.4}. I asked ChatGPT and New Bing but they keep giving me the wrong answer. So, is it really possible to do this without a for loop?

Edit 1: What I'm trying to achieve is the fastest speed. Also, in my case, most keys are unique. I considered using np.unique to acquire every key and then computing the min value for every key, but that clearly requires a for loop and quadratic time. I also considered sorting the arrays by key and applying np.min to the values of each key, but I doubt its efficiency when most keys are unique. Additionally, according to the comments, pandas.DataFrame has a groupby method which might be helpful, but I'm not sure if it's the fastest (perhaps I'm going to try it on my own).

Edit 2: I don't necessarily need a dict as the output; it can be an array of unique keys and an array of min values, and the order of keys doesn't matter.
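For concreteness, here is a loop-free sketch of the np.unique direction, combined with np.minimum.at (I have not benchmarked it against the other options):

import numpy as np

keys = np.array([1, 2, 2, 3, 3, 2])
values = np.array([0.2, 0.6, 0.8, 0.4, 0.9, 0.3])

# Map each key to a dense index, then scatter-reduce the values per index.
unique_keys, inverse = np.unique(keys, return_inverse=True)
mins = np.full(unique_keys.shape, np.inf)
np.minimum.at(mins, inverse, values)   # unbuffered in-place minimum per unique key

print(unique_keys)  # [1 2 3]
print(mins)         # [0.2 0.3 0.4]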



2023-11-21

Python - extend enum fields during creation

Is it possible to extend an enum during creation? Example:

class MyEnum(enum.StrEnum):
    ID = "id"
    NAME = "name"

And I need the enum, after creation, to contain the following fields:

ID = "id"
NAME = "name"
ID_DESC = "-id"
NAME_DESC = "-name"

I need this to create custom ordering enums for a FastAPI project.

Currently I create the new enum like this:

NewEnum = enum.StrEnum(
    f"{name.title()}OrderingEnum",
    [
        (
            f"{ordering_field.upper()}_DESC"
            if ordering_field.startswith("-")
            else ordering_field.upper(),
            ordering_field,
        )
        for ordering_field in itertools.chain(
            values,
            [f"-{field}" for field in values],
        )
    ],
)

But I need to do this automatically, because each module with a model has a similar enum. Maybe it's possible to solve my problem with a metaclass for my enum class, or by overriding the __new__ method, but I haven't found a working solution yet.
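For illustration, the closest I have to doing this "automatically" is wrapping the construction above in a reusable helper (a sketch only; make_ordering_enum is a hypothetical name, and it strips the leading "-" so the _DESC member names stay valid identifiers):

import enum
import itertools

def make_ordering_enum(name: str, values: list[str]) -> type[enum.StrEnum]:
    # Builds e.g. UserOrderingEnum with members ID, NAME, ID_DESC, NAME_DESC.
    members = []
    for field in itertools.chain(values, (f"-{value}" for value in values)):
        key = f"{field.lstrip('-').upper()}_DESC" if field.startswith("-") else field.upper()
        members.append((key, field))
    return enum.StrEnum(f"{name.title()}OrderingEnum", members)

UserOrderingEnum = make_ordering_enum("user", ["id", "name"])
# members: ID = "id", NAME = "name", ID_DESC = "-id", NAME_DESC = "-name"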



How to format bar chart yearly x-axis to not contain floats

What causes the x-axis to show year numbers in between the bars, with 0.5 values attached to the Population Year?

ru = px.bar(wakel, x = "Population Year", y = "Population", color = "City")
ru.show()

(screenshot of the resulting chart not included)
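One direction I'm considering (a sketch with dummy data standing in for my wakel frame, assuming the Population Year column is numeric) is to treat the year column as categorical:

import pandas as pd
import plotly.express as px

# Dummy data in place of my `wakel` DataFrame.
wakel = pd.DataFrame({
    "Population Year": [2019, 2020, 2019, 2020],
    "Population": [100, 120, 80, 95],
    "City": ["A", "A", "B", "B"],
})

ru = px.bar(wakel, x="Population Year", y="Population", color="City")
ru.update_xaxes(type="category")  # or convert the column with .astype(str) before plotting
ru.show()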



Custom hook with clear and update time not works as per expected

When I update the default interval time used by setInterval, it does not work as expected. Instead, a new interval is added as a separate instance. How do I clear the setInterval in the custom hook and update it with the new value?

app.jsx

import React from 'react';
import './style.css';
import CustomTimer from './Custom';
import { useState, useEffect } from 'react';

export default function App() {
  const [intervalTime, setIntervalTime] = useState(200);

  const time = CustomTimer(intervalTime);

  useEffect(() => {
    setTimeout(() => {
      console.log('Hi');
      setIntervalTime(500);
    }, 5000);
  });

  return (
    <div className="App">
      <h1>Hello CodeSandbox</h1>
      <h2>Start editing to see some happen! {time} </h2>
    </div>
  );
}

Custom.js

import { useEffect, useState } from 'react';

function CustomTimer(startTime) {
  const [timer, setTimer] = useState(startTime);
  useEffect(() => {
    const myInterval = setInterval(() => {
      if (timer > 0) {
        setTimer(timer - 1);
        console.log(timer);
      }
    }, 1000);
    return () => clearInterval(myInterval);
  }, [startTime]);
  return timer;
}

export default CustomTimer;

Live Demo => please check the console



2023-11-20

Mediation analysis with a tobit regression is failing to find the outcome variable

I am trying to run a mediation analysis with the mediation package in R. My outcome variable needs to be modeled with a tobit model (censored data).

When I try to run it, it claims that the outcome variable cannot be found, although it is in the dataframe. See the reproducible example:

library(mediation)
library(VGAM)  # for vglm() and tobit()
test <- data.frame(mediator = c(0.333,0.201,0.343,0.133,0.240),
                   DV = c(0.152,2.318,0.899,0.327,1.117),
                   outcome=c(1.715,1.716,0.544,3.284,3.599))
mediator_model <- lm(mediator ~ DV, data = test)
outcome_model <- vglm(outcome ~ mediator + DV,
                      tobit(Upper = 4, Lower = -4), link = "identity",data = test)

med <- mediate(mediator_model, outcome_model, treat = "DV", mediator = "mediator")

When I run this, I get the error Error in eval(predvars, data, env) : object 'outcome' not found, even though the outcome model runs without a problem.



2023-11-19

Extract string from image using pytesseract

I am a newbie at OCR manipulation and extracting data from images. After searching for a solution I did find some code, but it didn't work for my use case; it didn't correctly extract all the characters, at most 2 of them.

I want to get the characters on this image:

(example image not included)

I tried this solution:

import cv2
import pytesseract

image = cv2.imread('./images/screenshot_2023_11_16_15_41_24.png')

# Assuming 4 characters in a 36x9 image
char_width = image.shape[1] // 4
char_height = image.shape[0]

characters = []
characters_slices = [(0, 9), (9, 18), (18, 27), (27, 36)]  # Adjust based on your image
for start, end in characters_slices:
    char = image[0:char_height, start:end]
    characters.append(char)

# Perform OCR on each character
extracted_text = ""
for char in characters:
    char_text = pytesseract.image_to_string(char, config='--psm 10 --oem 3 -c char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789')
    extracted_text += char_text.strip() + " "

print("Extracted Text:", extracted_text)

Output would be: 'H9FA'
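One thing I am also considering is running OCR on the whole preprocessed image instead of per-character slices; a rough sketch of that idea (untested, the scaling factor and --psm value are guesses):

import cv2
import pytesseract

image = cv2.imread('./images/screenshot_2023_11_16_15_41_24.png')

# Upscale and binarise first; a 36x9 crop is usually far too small for Tesseract.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray, None, fx=8, fy=8, interpolation=cv2.INTER_CUBIC)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# --psm 7: treat the image as a single text line instead of isolated characters.
text = pytesseract.image_to_string(
    binary,
    config='--psm 7 --oem 3 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
)
print("Extracted Text:", text.strip())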

Thanks.



2023-11-18

Makefile won't find *.lib but path is declared

I am trying to compile a plugin example for a software from autodesk.

here is the Makefile

##############################################################################
#           Makefile for use by API developers                               #
#                                                                            #
#  NOTE: "vcvarsall amd64" must be run before attempting to compile the API  #
#  examples. Please see the API documentation for information.               #
#                                                                            #
##############################################################################

#
# If the location of the Alias libraries and header files are
# different from $ALIAS_LOCATION, set it here.
#
ALIAS_LOCATION=C:\Program Files\Autodesk\AliasSurface2023.0


CPPEXAMPLES = cppCube.exe

EXAMPLES = $(CPPEXAMPLES)

CC = cl.exe
CPLUSPLUS = cl.exe
LINK = link.exe

INCLUDES = /I. /I"$(ALIAS_LOCATION)\ODS\Common\include" /I"C:\Program Files\Windows Kits\10\Include\10.0.22621.0\ucrt" /I"C:\Program Files\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\include"

# 
# Dynamic Linking.
#
EXTRA_LFLAGS = /LIBPATH:"$(ALIAS_LOCATION)\lib";"C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\x64" /VERBOSE
EXTRA_CFLAGS = 
LFLAGS = /nologo /SUBSYSTEM:CONSOLE /NODEFAULTLIB:LIBC.LIB $(EXTRA_LFLAGS) /STACK:0xa00000

#
# Required libraries. 
#
LIBS = libalias_api.lib

STD = kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib \
     advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib \
     odbc32.lib odbccp32.lib comctl32.lib netapi32.lib \
     version.lib ws2_32.lib

CLIBS = $(LIBS) $(STD)

CFLAGS = /nologo /MD $(INCLUDES) $(COPIOUS_OUTPUT) $(EXTRA_CFLAGS)
CPLUSPLUSFLAGS = $(CFLAGS)

#
# Rules for building.
#
.SUFFIXES: .c .c++ .obj .cpp

.cpp.obj:
    $(CPLUSPLUS) -c $(CPLUSPLUSFLAGS) $*.cpp

.c.obj:
    $(CC) -c $(CFLAGS) $*.c

#
# Build all the examples.
#
default: $(EXAMPLES)

#
# Copy all the source files for the examples.
#
copy:
    copy "$(ALIAS_LOCATION)\ODS\OpenModel\examples\*.cpp" .
    copy "$(ALIAS_LOCATION)\ODS\OpenModel\examples\*.c" .
    copy "$(ALIAS_LOCATION)\ODS\Common\examples\*.cpp" .
    copy "$(ALIAS_LOCATION)\ODS\Common\examples\*.c" .
    copy "$(ALIAS_LOCATION)\ODS\Common\examples\*.h" .

#
# Clean up.
#
clean:
    del *.obj *.exp *.lib $(EXAMPLES)

#
# Rules for building the executables.

cppCube.exe:            cppCube.obj
    $(LINK) $(LFLAGS) /out:$@ cppCube.obj $(CLIBS)

The problem is that when I try to run nmake, it doesn't find files that are present in the directories mentioned:

$ nmake

Microsoft (R) Program Maintenance Utility Version 14.37.32825.0
Copyright (C) Microsoft Corporation.  All rights reserved.

        link.exe /nologo /SUBSYSTEM:CONSOLE /NODEFAULTLIB:LIBC.LIB /LIBPATH:"C:\Program Files\Autodesk\AliasSurface2023.0\lib";"C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\x64" /STACK:0xa00000 /VERBOSE /out:cppCube.exe cppCube.obj libalias_api.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib  advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib  odbc32.lib odbccp32.lib comctl32.lib netapi32.lib  version.lib ws2_32.lib

Starting pass 1
Processed /DEFAULTLIB:MSVCRT
Processed /DEFAULTLIB:OLDNAMES
LINK : fatal error LNK1181: cannot open input file 'libalias_api.lib'
NMAKE : fatal error U1077: 'link.exe /nologo /SUBSYSTEM:CONSOLE /NODEFAULTLIB:LIBC.LIB /LIBPATH:"C:\Program Files\Autodesk\AliasSurface2023.0\lib";"C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\x64" /STACK:0xa00000 /VERBOSE /out:cppCube.exe cppCube.obj libalias_api.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib  advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib  odbc32.lib odbccp32.lib comctl32.lib netapi32.lib  version.lib ws2_32.lib' : return code '0x49d'
Stop.

I run $ "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" amd64 getting this message [vcvarsall.bat] Environment initialized for: 'x64' .

I have isntalled Visual Studio Tools and running from a Administrator Developer Command Prompt.

is the errror only complaining of 'libalias_api.lib'? or also for kernel32.lib? it is phrased in an ambiguous way.

Here is the guide that I followed: https://help.autodesk.com/view/ALIAS/2023/ENU/?guid=GUID-D9756922-3960-4FC6-AFFC-940A0A5E8C7F

--- UPDATE 1 --- After installing the Windows SDK I still get the LINK error.



2023-11-17

Building correct API query URL to filter data from clinicaltrials.gov by multiple keywords

I am trying to get some data from a public API and need some help figuring out the correct query syntax for the request URL.

Below is my script. (Never mind fixing or improving the function, it is working well enough so far.)

What I need is the correct query URL.

I would like to get a list of clinical studies from clinicaltrials.gov for search term "EGFR", but narrow the search down so that only results are returned that have "Recruiting" OR "Active, not recruiting" in the "OverallStatus" field. Here are the possible values for the "OverallStatus" field.

I am having a hard time figuring out the API docs. There is a page with Search Expressions and Syntax, but they don't explain how to search for multiple values. How do I build the query string to search for multiple possible values in a field?

I appreciate any insights!

library(tidyverse)
library(httr)
library(jsonlite)
library(glue)


get_studies_df <- function(query_url){
  
  # get clinical studies data
  res <- httr::GET(query_url)
  
  if(!httr::status_code(res) == 200){
    #if request failed return empty data frame
    empty_df <- stats::setNames(data.frame(matrix(ncol = 5, nrow = 0)), c("Rank", "NCTId", "Condition", "BriefTitle", "OverallStatus"))
    return(empty_df)
  }
  
  # get data from response obj
  data <- httr::content(res, as="text", encoding = "UTF-8") %>%
    jsonlite::fromJSON()
  
  # prepare clinical studies data frame
  studies_df <- data$StudyFieldsResponse$StudyFields %>%
    # combine conditions if there is more than one
    dplyr::rowwise() %>%
    mutate(Condition = paste(Condition, collapse = ", ")) %>%
    dplyr::ungroup()
  # unlist data frame columns to show full length text
  for (i in c(1:ncol(studies_df))){
    studies_df[,i] <- unlist(studies_df[,i])
  }
  
  return(studies_df)
  
}
 

### here are all the query strings I tried ###

# get all studies for EGFR (WORKING, but finds 5000+ studies, way too many)
query_url <- "https://ClinicalTrials.gov/api/query/study_fields?expr=EGFR&fields=NCTId,Condition,BriefTitle,OverallStatus&fmt=json"

# get "Recruiting" studies only (WORKING)
query_url <- "https://ClinicalTrials.gov/api/query/study_fields?expr=EGFR+AREA[OverallStatus]+Recruiting&fields=NCTId,Condition,BriefTitle,OverallStatus&fmt=json"

# get "Active" studies only (WORKING)
query_url <- "https://ClinicalTrials.gov/api/query/study_fields?expr=EGFR+AREA[OverallStatus]+Active&fields=NCTId,Condition,BriefTitle,OverallStatus&fmt=json"

### I'm trying to get "Recruiting" OR "Active" studies. These are NOT WORKING ###

# returns only "Active"
query_url <- "https://ClinicalTrials.gov/api/query/study_fields?expr=EGFR+AREA[OverallStatus]+Recruiting+Active&fields=NCTId,Condition,BriefTitle,OverallStatus&fmt=json"

# returns nothing
query_url <- "https://ClinicalTrials.gov/api/query/study_fields?expr=EGFR+AREA[OverallStatus]+RANGE[Recruiting,Active]&fields=NCTId,Condition,BriefTitle,OverallStatus&fmt=json"

# returns only "Active"
query_url <- "https://ClinicalTrials.gov/api/query/study_fields?expr=EGFR+AREA[OverallStatus]+Recruiting+AREA[OverallStatus]+Active&fields=NCTId,Condition,BriefTitle,OverallStatus&fmt=json"

# returns everything ("Recruiting", "Completed", "Unknown status", "Active, not recruiting") ??
query_url <- "https://ClinicalTrials.gov/api/query/study_fields?expr=EGFR+AREA[OverallStatus]+Recruiting+OR+Active&fields=NCTId,Condition,BriefTitle,OverallStatus&fmt=json"


df <- get_studies_df(query_url)

Output table:

(screenshot of the output table not included)



NextJS 13 in Azure: process.env.{SETTING_NAME} in server components is undefined

I upgraded to NextJS v13 from v12. I thought components inside the src/app folder are server components by default, but when I try to use the values from process.env after deploying to Azure, it returns undefined for those configuration settings.

It works when I run it locally, maybe because those settings are in a .env file, but after deployment it should get them from the Azure configuration settings. Here is the sample page:

src/app/page.tsx

import SampleClientView from '@components/components/client/SampleClientView'; // This has 'use client';

const SamplePage = () => {
  const configOne = process.env.CONFIG_ONE;
  const configTwo = process.env.CONFIG_TWO;

  return (
    <SampleClientView
      configOne={configOne}
      configTwo={configTwo}
    />
  );
};

export default SamplePage;


2023-11-16

LINQ Exception when Lists of Documents containing Lists - Querying MongoDB Entity Framework Core

We are doing some development on the EF Core Library for MongoDB. It's in preview, so I'm trying to work out if this is a bug or a feature (or lack of a feature).

We are not doing any queries on these elements yet, but when the problematic collection is queried, if the problematic element is defined, we're running into problems.

public class Template
{
    public ObjectId _id { get; set; }
    public string name { get; set; }
    public bool inheritFrom { get; set; }
    public List<CallRound> callRounds { get; set; } //this is a list of sub documents with a list, it complains
    public Qualifier qualifiers { get; set; } // this is a sub document with a list, it doesn't complain
}
public class CallRound
{
    public List<CollectionReference>? jobQualifiers { get; set; } //this is the offender, if we comment out this code, the query functions
    public bool isOvertime { get; set; }
    public bool offerOrientedOnly { get; set; }
    public string genderRequirements { get; set; }

}
public class Qualifier
{
    public ObjectId worksiteId { get; set; }
    public ObjectId jobClassificationId { get; set; }
    public List<CollectionReference> jobQualifiers { get; set; }
    public string genderRequirements { get; set; }
}

The question is: is there something wrong with my mapping, or with the library itself? Here's the mapping.

protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        modelBuilder.Entity<Callout>().ToCollection("Callout");
        modelBuilder.Entity<Template>().ToCollection("CalloutTemplate")
                                                .OwnsMany<CallRound>("callRounds")
                                                .OwnsMany<OtherCollectionReference>("jobQualifiers").WithOwner();
        modelBuilder.Entity<JobClassification>().ToCollection("JobClassification");
        modelBuilder.Entity<JobQualifier>().ToCollection("JobQualifier");
        modelBuilder.Entity<Region>().ToCollection("Region");
        modelBuilder.Entity<Worksite>().ToCollection("Worksite");
    }

Finally, here's the call stack:

Result: Function 'CalloutBuilder', Invocation id '911c14d5-8c51-4148-b67a-6ce7588a383f': An exception was thrown by the invocation.
Exception: System.AggregateException: One or more errors occurred. (The LINQ expression 'o' could not be translated. Either rewrite the query in a form that can be translated, or switch to client evaluation explicitly by inserting a call to 'AsEnumerable', 'AsAsyncEnumerable', 'ToList', or 'ToListAsync'. See https://go.microsoft.com/fwlink/?linkid=2101038 for more information.)


Deploy Container App from Bitbucket to Azure

I have a Bitbucket repository which builds my code with a pipeline and pushes a Docker image to Docker Hub. So far, so good. Now I want to continuously deploy the latest image to my Container App on Azure. My options seem to be:

  1. Setup Continuous Deployment in Azure
  2. Create a pipeline step in bitbucket to push the new image created to Azure with Azure CLI

My problem with 1. is that it seems only GitHub is supported, and it is required: Azure continuous deployment

And my problem with 2. is that it doesn't look like Atlassian supports this:

Atlassian azure pipes

Which leaves me with some custom-created pipeline where I'm supposed to do this with the Azure CLI, where I'm way out of my depth.

answer from other question

Does anyone have a suggestion as to how I can automatically update my Container App?



2023-11-15

Error saying "git is not installed" but it actually is [closed]

This error appears in the cmd window (image of the error not included) when I try to run "gradlew.bat" from the baritone repository "https://ift.tt/Qq6blna". Also, it is definitely not the file's fault, because other people don't have this issue.

I tried running the gradlew.bat file and expected it to create/build a "dist" folder, which is supposed to contain the artifacts, i.e. the .jar files of the MC mod baritone; the result is that it didn't get created. The gradlew.bat file is at https://pastebin.com/FEzqR9FR; its code is below:

@rem
@rem Copyright 2015 the original author or authors.
@rem
@rem Licensed under the Apache License, Version 2.0 (the "License");
@rem you may not use this file except in compliance with the License.
@rem You may obtain a copy of the License at
@rem
@rem      https://www.apache.org/licenses/LICENSE-2.0
@rem
@rem Unless required by applicable law or agreed to in writing, software
@rem distributed under the License is distributed on an "AS IS" BASIS,
@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@rem See the License for the specific language governing permissions and
@rem limitations under the License.
@rem

@if "%DEBUG%" == "" @echo off
@rem ##########################################################################
@rem
@rem  Gradle startup script for Windows
@rem
@rem ##########################################################################

@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal

set DIRNAME=%~dp0
if "%DIRNAME%" == "" set DIRNAME=.
set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%

@rem Resolve any "." and ".." in APP_HOME to make it shorter.
for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi

@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m"

@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome

set JAVA_EXE=java.exe
%JAVA_EXE% -version >NUL 2>&1
if "%ERRORLEVEL%" == "0" goto execute

echo.
echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.

goto fail

:findJavaFromJavaHome
set JAVA_HOME=%JAVA_HOME:"=%
set JAVA_EXE=%JAVA_HOME%/bin/java.exe

if exist "%JAVA_EXE%" goto execute

echo.
echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.

goto fail

:execute
@rem Setup the command line

set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar


@rem Execute Gradle
"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %*

:end
@rem End local scope for the variables with windows NT shell
if "%ERRORLEVEL%"=="0" goto mainEnd

:fail
rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
rem the _cmd.exe /c_ return code!
if  not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
exit /b 1

:mainEnd
if "%OS%"=="Windows_NT" endlocal

:omega




2023-11-14

Keycloak oidc authentication issue on K8s having replica of application server

I am facing an issue with the authorization_code grant type on a replicated setup in a K8s cluster and am seeking advice and help. My setup is as follows:

  1. 1 instance of Keycloak server running on 1 pod on 1 node.
  2. 2 instances of backend server running on 2 pods on 2 different nodes. (say api1 and api2)

Basically, the problem is this: suppose api1 initiates a code verification challenge with Keycloak during the authentication workflow, and the user successfully authenticates with Keycloak using a valid username and password. Keycloak then invokes the redirectURI of the backend server. However, the redirectURI, instead of hitting api1, hits the other backend instance, api2. Because of this, the session state of the Request object on api2 does not have the code_verifier property, so we are unable to call the /protocol/openid-connect/token API to get the access token.

What I am trying to achieve is either to have the redirectURI always hit the same backend server instance that initiated the request, OR a way for the backend servers (api1 and api2) to share sessions, so that irrespective of who initiated the request the session will always hold the code_verifier value upon successful authentication with Keycloak. I know this is not a Keycloak-specific issue, rather more of a K8s thing (I suppose), but if anyone has encountered this situation before and has managed a proper resolution (without compromising HA), then kindly share your knowledge here.

I tried to check if I can attach a sticky session between the Keycloak and backend server so that the redirectURI always hits the same backend server that started the auth request, but unfortunately couldn't find any leads nor any similar problem posted in the community.

Any help or advice is much appreciated. Thanks



2023-11-13

Iterate over list of YouTube/RTSP streams, add text overlays, and expose as a fixed RTSP endpoint

My goal is a shell script or Python utility that cycles through a list (.csv, .yaml, or .json) of YouTube/RTSP source streams in a format similar to the following (.csv) example:

url,overlay_text,delay_ms
rtsp://admin:12345@192.168.1.210:554/Streaming/Channels/101,THIS IS OVERLAY TEXT,5000
https://www.youtube.com/watch?v=dQw4w9WgXcQ,THIS IS MORE OVERLAY TEXT,5000
.
.
.
rtsp://admin:12345@192.168.1.210:554/Streaming/Channels/101,THIS IS OVERLAY TEXT,5000
https://www.youtube.com/watch?v=dQw4w9WgXcQ,THIS IS MORE OVERLAY TEXT,5000

For each record in the text file, the utility will:

  • Capture the stream from the specified source URL
  • Add the overlay_text for that record to the stream
  • Proxy or otherwise expose it as a fixed/unchanging RTSP endpoint
  • Wait delay_ms for that record
  • Kill that stream, go on to the next one, and repeat...exposing the next stream using the same RTSP endpoint. So, to a consumer of that RTSP stream, it just seems like a stream that switched to a different source.
  • When it reaches the last record in the text file, go back to the beginning

It could be as simple as a Bash shell script that reads the input text file and iterates through it, running a GStreamer gst-launch-1.0 command with the appropriate pipeline arguments.

I can handle the reading of the text file and the iteration in either Bash or Python. I just need to know the proper way to invoke (and kill) GStreamer to add the text overlay and expose it as an RTSP endpoint.
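To show the shape of what I have in mind, here is a rough Python sketch of the iterate/spawn/kill part. The gst-launch-1.0 pipeline string is only a placeholder with a textoverlay element (a YouTube page URL would still need to be resolved to a direct media URL first), and re-publishing the result on a fixed RTSP endpoint, e.g. via gst-rtsp-server, is exactly the part I am asking about:

import csv
import subprocess
import time

def pipeline_for(url, overlay_text):
    # Placeholder pipeline: decode the source and draw the overlay text locally.
    # The fixed RTSP output still needs to be added.
    return [
        "gst-launch-1.0",
        "uridecodebin", f"uri={url}", "!",
        "textoverlay", f"text={overlay_text}", "!",
        "autovideosink",
    ]

while True:  # when the last record is reached, start over from the beginning
    with open("streams.csv", newline="") as f:
        for row in csv.DictReader(f):
            proc = subprocess.Popen(pipeline_for(row["url"], row["overlay_text"]))
            time.sleep(int(row["delay_ms"]) / 1000.0)
            proc.terminate()   # kill this stream and move on to the next record
            proc.wait()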



2023-11-12

how to get rid of peter panning completely using cascaded shadow maps?

I am making a voxel open-world game with C and OpenGL 4.0. I implemented cascaded shadow maps using this tutorial: https://learnopengl.com/Guest-Articles/2021/CSM

I can't get rid of peter panning no matter how I set the bias variable. My shadows look like this: (screenshot of the current shadows not included)

As you can see, there are thin gaps between the cubes and their shadows. I don't want that.

This is how I set bias variable:

float bias = max(0.005f * (1.0f - dot(normal, lightDirection)), 0.00028f);
const float biasModifier = 0.2f;
if (layer == 0) {
    bias *= 1 / (cascade0range * biasModifier);
}
else if (layer == 1) {
    bias *= 1 / (cascade1range * biasModifier);
}
else if (layer == 2) {
    bias *= 1 / (cascade2range * biasModifier);
}
else if (layer == 3) {
    bias *= 1 / (cascade3range * biasModifier);
}

If you want to see all of the code, this is the repository of the project: https://github.com/SUKRUCIRIS/OPENGL_VOXEL_GSU

I tried to decrease the bias variable further, but it caused shadow acne and even then the gap was still there (screenshots of the shadow acne and the remaining gap not included).



Recursively adding columns to pyspark dataframe nested arrays

I'm working with a pyspark DataFrame that contains multiple levels of nested arrays of structs. My goal is to add an array hash column plus the record's top-level hash column to each nested array. To achieve that for all nested arrays I need to use recursion, since I do not know how deeply nested the arrays can be.

So for this example schema

from pyspark.sql.types import StructType, StructField, StringType, ArrayType

schema = StructType([
    StructField("name", StringType()),
    StructField("experience", ArrayType(StructType([
        StructField("role", StringType()),
        StructField("duration", StringType()),
        StructField("company", ArrayType(StructType([
            StructField("company_name", StringType()),
            StructField("location", StringType())
        ])))
    ])))
])

The desired output schema would look like this:

hashed_schema = StructType([
    StructField("name", StringType()),
    StructField("experience", ArrayType(StructType([
        StructField("role", StringType()),
        StructField("duration", StringType()),
        StructField("experience_hash", StringType()),  # Added hash for the experience collection
        StructField("company", ArrayType(StructType([
            StructField("company_name", StringType()),
            StructField("location", StringType()),
            StructField("company_hash", StringType())  # Added hash for the company collection
        ])))
    ]))),
    StructField("employee_hash", StringType()),  # Added hash for the entire record
])

I have tried to write code with recursion that iterates through each nested array and hashes its columns. While it seems to work for first-level nested arrays, the recursion part does not work; I get an error that the recursion is too deep.


from pyspark.sql.functions import col, concat_ws, lit, md5, transform
from pyspark.sql.types import ArrayType

def hash_for_level(level_path):
    return md5(concat_ws("_", *[lit(elem) for elem in level_path]))

def add_hash_columns(df, level_path, current_struct, root_hash_col=None):
    # If this is the root level, create the root hash
    if not level_path and root_hash_col is None:
        root_hash_col = 'employee_hash'
        df = df.withColumn(root_hash_col, hash_for_level(['employee']))
    
    # Traverse the current structure and add hash columns
    for field in current_struct.fields:
        new_level_path = level_path + [field.name]
        # If the field is an array of structs, add a hash for each element in the array
        if isinstance(field.dataType, ArrayType):
            nested_struct = field.dataType.elementType
            hash_expr = transform(
                col('.'.join(level_path + [field.name])),
                lambda x: x.withField(new_level_path[-1] + '_hash', hash_for_level(new_level_path))
                    .withField(root_hash_col, col(root_hash_col))  # Include the root hash
            )
            # Add the hash column to the array elements
            df = df.withColumn('.'.join(level_path + [field.name]), hash_expr)
            # Recursion call to apply the same logic for nested arrays
            df = add_hash_columns(df, new_level_path, nested_struct, root_hash_col)
            
    # Add a hash column at the current level
    if level_path:
        #print("Level path:", level_path)
        hash_col_name = '_'.join(level_path) + '_hash'
        df = df.withColumn(hash_col_name, hash_for_level(level_path))
        if root_hash_col:
            # Ensure the root hash is included at each struct level
            df = df.withColumn(root_hash_col, col(root_hash_col))
            
    return df

df = spark.createDataFrame([], schema)
df = add_hash_columns(df, [], df.schema)
df


Getting an error when uploading a SignUpOrSignIn custom policy file; I want to include user identities in the id token

I want to fetch the signed-in user's identities array from the tenant and include it in the id token (adding it to the RelyingParty output claims). I would like to know how to add the user's properties, mainly identities, to the id token.

   <ClaimType Id="identities">
    <DisplayName>Identities</DisplayName>
    <DataType>stringCollection</DataType>
  </ClaimType>                   
        
<ClaimsProvider>
  <DisplayName>Azure Active Directory</DisplayName>
  <TechnicalProfiles>
    <TechnicalProfile Id="AAD-UserReadUsingObjectId">
      <DisplayName>Azure Active Directory</DisplayName>
      <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
      <CryptographicKeys>
        <Key Id="issuer_secret" StorageReferenceId="B2C_1A_TokenSigningKeyContainer" />
      </CryptographicKeys>                 
      <InputClaims>
        <InputClaim ClaimTypeReferenceId="objectId" Required="true" />
      </InputClaims>
      <OutputClaims>
         <OutputClaim ClaimTypeReferenceId="identities" />
      </OutputClaims>
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>    
 <RelyingParty>
<DefaultUserJourney ReferenceId="SignUpOrSignIn" />
<Endpoints>
  <!--points to refresh token journey when the app makes refresh token request-->
  <Endpoint Id="Token" UserJourneyReferenceId="RedeemRefreshToken" />
</Endpoints>
<TechnicalProfile Id="PolicyProfile">
  <DisplayName>PolicyProfile</DisplayName>
  <Protocol Name="OpenIdConnect" />
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="displayName" />
    <OutputClaim ClaimTypeReferenceId="givenName" />
    <OutputClaim ClaimTypeReferenceId="surname" />
    <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub"/>
    
    <OutputClaim ClaimTypeReferenceId="identities"/>
    <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
  </OutputClaims>   
  <SubjectNamingInfo ClaimType="sub" />
</TechnicalProfile>


Next.js instrumentation behaves differently with conditional extracted to variable

This issue came from an earlier issue I created, which was resolved, but the underlying behaviour is so baffling I'm desperate to know what is going on.

Next.js has an experimental feature called Instrumentation that allows code to be run on boot-up. It seems to run both server-side and client-side, so a special check is necessary if nodejs-dependent imports are to be used. I have working code that uses this functionality:

export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const os = await require('os');
    console.log(os.hostname());
  }
}

However, the following code does not work:

export async function register() {
  const isServer = process.env.NEXT_RUNTIME === 'nodejs'
  if (isServer) {
    const os = await require('os');
    console.log(os.hostname());
  }
}

The error is:

- error ./instrumentation.ts:12:21
Module not found: Can't resolve 'os'
  10 |   const isServer = process.env.NEXT_RUNTIME === 'nodejs'
  11 |   if (isServer) {
> 12 |     const os = await require('os');
     |                     ^
  13 |     console.log(os.hostname());
  14 |   }
  15 | }

Obviously I can use the first one and be happy. But can anyone explain why the second fails? Perhaps something involving tree shaking, or caching, or...?

Here's a Stackblitz repro.



2023-11-11

CUDA issue with NER (Named Entity Recognition) for ML predictions

I'm attempting to use NamedEntityRecognition (NER)(https://github.com/dotnet/machinelearning/issues/630) to predict categories for words/phrases within a large body of text.

I am currently using 3 NuGet packages to try to get this working:

Microsoft.ML (3.0.0-preview.23511.1)

Microsoft.ML.TorchSharp (0.21.0-preview.23511.1)

Torchsharp-cpu (0.101.1)

At the point of training the model [estimator.Fit(dataView)], I get the following error:

Field not found: 'TorchSharp.torch.CUDA'.

I may have misunderstood something here, but I should be processing with CPU from the Torchsharp-cpu package and I'm not sure where the CUDA reference is coming from. This also appears to be a package reference rather than a field?

using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.TorchSharp;
using System;
using System.Collections.Generic;
using System.Windows.Forms;

namespace NerTester
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

    private class TestSingleSentenceData
    {
        public string Sentence;
        public string[] Label;
    }

    private class Label
    {
        public string Key { get; set; }
    }

    private void startButton_Click(object sender, EventArgs e)
        {
        try
        {
                var context = new MLContext();
                context.FallbackToCpu = true;
                context.GpuDeviceId = null;

                var labels = context.Data.LoadFromEnumerable(
                new[] {
                new Label { Key = "PERSON" },
                new Label { Key = "CITY" },
                new Label { Key = "COUNTRY"  }
                });

                var dataView = context.Data.LoadFromEnumerable(
                    new List<TestSingleSentenceData>(new TestSingleSentenceData[] {
                    new TestSingleSentenceData()
                    {   // Testing longer than 512 words.
                        Sentence = "Alice and Bob live in the USA",
                        Label = new string[]{"PERSON", "0", "PERSON", "0", "0", "0", "COUNTRY"}
                    },
                     new TestSingleSentenceData()
                     {
                        Sentence = "Alice and Bob live in the USA",
                        Label = new string[]{"PERSON", "0", "PERSON", "0", "0", "0", "COUNTRY"}
                     },
                    }));
                var chain = new EstimatorChain<ITransformer>();
                var estimator = chain.Append(context.Transforms.Conversion.MapValueToKey("Label", keyData: labels))
                   .Append(context.MulticlassClassification.Trainers.NameEntityRecognition(outputColumnName: "outputColumn"))
                   .Append(context.Transforms.Conversion.MapKeyToValue("outputColumn"));

                var transformer = estimator.Fit(dataView);
                transformer.Dispose();
                
                MessageBox.Show("Success!");
            }
        catch (Exception ex)
            {
        MessageBox.Show($"Error: {ex.Message}");
            }
    }
    }
}

Application is running on x64 and the documentation for NER appears to be limited.

Any help would be greatly appreciated.

Tried changing the NuGet packages I'm referencing, including the use of libtorch packages.

Attempted running the application in x86 and x64 configurations.

Added code to try to force CPU usage rather than GPU (CUDA).



2023-11-10

Understanding QuickGrid internals: "Defer hack"

I am studying the source code of the QuickGrid from Blazor (ASP.NET Core 8). The implementation leverages some internal knowledge on how Blazor handles the actual rendering in order to collect all ColumnBase child components. It does so by initiating and ending a "collecting session" and during this session all ColumnBase child components attach themselves to the cascaded grid context.

<CascadingValue TValue="InternalGridContext<TGridItem>" IsFixed="true" Value="@_internalGridContext">
    @{ StartCollectingColumns(); }
    @ChildContent
    <Defer>
        @{ FinishCollectingColumns(); }
        <ColumnsCollectedNotifier TGridItem="TGridItem" />

        @* HTML table... *@
    </Defer>
</CascadingValue>

The ColumnBase components inside the ChildContent execute the following code in their BuildRenderTree method:

InternalGridContext.Grid.AddColumn(this, InitialSortDirection, IsDefaultSortColumn);

The Defer component is built like this:

// This is used by QuickGrid to move its body rendering to the end of the render queue so we can collect
// the list of child columns first. It has to be public only because it's used from .razor logic.
public sealed class Defer : ComponentBase
{
    [Parameter] public RenderFragment? ChildContent { get; set; }

    protected override void BuildRenderTree(RenderTreeBuilder builder)
    {
        builder.AddContent(0, ChildContent);
    }
}

There is also a comment in the Defer component explaining what it does which I do understand. However, I do not exactly understand how and why this works. Can someone explain to me the details on how and why this works?

It somehow suggests that RenderFragments are delayed when rendering. But that's not really intuitive to me. I am thinking of the rendering as some sort of left-order tree traversal of the nodes, including the RenderFragments. But it almost looks like RenderFragments are not traversed initially.



Can I connect to cloud sql postgres using Private IP from my computer (locally) using Python?

I'm trying to connect to a Cloud SQL PostgreSQL instance using Python code from my local machine (locally), using the private IP of my Cloud SQL instance.

from google.cloud.sql.connector import Connector, IPTypes
import pg8000

import sqlalchemy
from sqlalchemy import text

def connect_with_connector_auto_iam_authn():

    instance_connection_name = "connection name"
    db_user = "SA name"   # e.g. 'my-db-user'
    db_name = "postgres"  # e.g. 'my-database'

    ip_type = IPTypes.PRIVATE

    # initialize Cloud SQL Python Connector object
    connector = Connector()

    def getconn() -> pg8000.dbapi.Connection:
        conn: pg8000.dbapi.Connection = connector.connect(
            instance_connection_name,
            "pg8000",
            user=db_user,
            password="",
            db=db_name,
            enable_iam_auth=True,
            ip_type=ip_type
        )
        return conn

    # The Cloud SQL Python Connector can be used with SQLAlchemy
    # using the 'creator' argument to 'create_engine'
    pool = sqlalchemy.create_engine(
        "postgresql+pg8000://",
        creator=getconn, pool_pre_ping=True
    )

    with pool.connect() as conn:
        results = conn.execute(text("SELECT current_user, current_database();"))
        for row in results:
            print(row)
    print("connected")

    return "connected"

connect_with_connector_auto_iam_authn()

I'm getting the following error message:

TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond The above exception was the direct cause of the following exception:

sqlalchemy.exc.InterfaceError: (pg8000.exceptions.InterfaceError) Can't create a connection to host 10.82.1.2 and port 3307 (timeout is None and source_address is None).

I'm thinking that maybe I can't use the private IP from my machine, and that the only way to use the private IP is from within the same VPC, which would mean using another GCP resource.
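To test that suspicion, a quick TCP reachability check (standard library only) against the host and port taken from the error message should show whether the private IP is even routable from my laptop; if this also times out, the problem is routing/VPC access rather than the connector code:

# Stdlib-only reachability check; 10.82.1.2 and 3307 are taken from the error above.
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"cannot reach {host}:{port} -> {exc}")
        return False

print(can_reach("10.82.1.2", 3307))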

Thanks in advance!



Join statements with multiple filters in Spring Data JPA

I am using Spring Data JPA to query records from a database.

This is my SQL query:

SELECT t1.id
FROM test1 t1 LEFT OUTER JOIN test2 t2 ON t1.id = t2.id
WHERE t2.key = 'keyNames'  
and t2.value IN ('a', 'b', 'c')
and to_timestamp(t1.createdtime,'YYYY-MM-DD"T"HH24:MI:SSxff3"Z"') >= (SYSDATE - INTERVAL '12' HOUR);

I have created the test1 and test2 entities with a @OneToMany association, and the repositories.

public interface Test1Repository extends JpaRepository<Test1, Long>, JpaSpecificationExecutor<Test1> {
}

public interface Test2Repository extends JpaRepository<Test2, Long>, JpaSpecificationExecutor<Test2> {
}

public class Test1 {
 @Id
 @Column(name = "ID", nullable = false)
 private Long Id;

 @Column(name = "CREATED_DATE", nullable = false)
 @JsonFormat(pattern = "yyyy-MM-dd'T'HH:mm:ss'Z'")
 @CreationTimestamp
 private LocalDateTime createdDate;

 @OneToMany(mappedBy = "t1", fetch = FetchType.EAGER)
 @ToString.Exclude
 private Set<Test2> test2;

}


public class Test2{
 @Id
 @Column(name = "ID", nullable = false)
 private Long Id;

 @ManyToOne(optional = false)
 @JoinColumn(name = "id", nullable = false)
 @ToString.Exclude
 @JsonIgnore
 private Test1 test1;

 @Column(name = "key", length = 256)
 private String key;

 @Column(name = "value", length = 256)
 private String value;
}

I have the t1 specifications for the join condition as below:

public class Test1Specifications {
    public static Specification<Test1> hasTestWithValue(List<String> values){
        return (root, query, criteriaBuilder) -> {
            Join<Test1, Test2> test1test2Join = root.join("test2"); // join on the Set<Test2> field mapped in Test1
            return criteriaBuilder.equal(test1test2Join.get("key"),"keyNames");
        };
    }
}

public class Test1Service{
 private final Test1Repository test1Repository;
 private final Test2Repository test2Repository;

 public Test1Service(Test1Repository test1Repository, Test2Repository test2Repository){
 this.test1Repository = test1Repository;
 this.test2Repository = test2Repository;
}

public List<String> getIds(List<String> values){
 List<String> ids;
 Specification<Test1> filters = Test1Specifications.hasTestWithValue(values);
 ids = test1Repository.findAll(filters)
 .stream()
 .map(test1 -> String.valueOf(test1.getId()))
 .collect(Collectors.toList());

 return ids;
}

}

I am not able to figure out how to add the remaining two filters. I would really appreciate it if someone could help me understand how to add multiple filter conditions, as in the query above.



2023-11-09

Qt Context Menu for each item in ListView [duplicate]

I use PyQt5 + Qt Designer 5. Is it possible to show a context menu when I right-click on an item in a ListView area?

The best I've gotten so far is a context menu that opens when I right-click anywhere inside the ListView area, but not on one specific item. A minimal sketch of the direction I'm exploring is below.
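The sketch uses a plain QListWidget instead of my Designer-generated ListView (the widget and item names are illustrative). It connects customContextMenuRequested and uses itemAt() so the menu only appears for a real item:

import sys
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QApplication, QListWidget, QMenu

app = QApplication(sys.argv)

list_widget = QListWidget()
list_widget.addItems(["Item 1", "Item 2", "Item 3"])
list_widget.setContextMenuPolicy(Qt.CustomContextMenu)

def show_item_menu(pos):
    item = list_widget.itemAt(pos)  # None when the right-click lands on empty space
    if item is None:
        return  # ignore clicks that are not on an item
    menu = QMenu(list_widget)
    menu.addAction("Do something with %s" % item.text())
    menu.exec_(list_widget.viewport().mapToGlobal(pos))

list_widget.customContextMenuRequested.connect(show_item_menu)
list_widget.show()
sys.exit(app.exec_())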



ADO.NET connection to MySQL in SSIS

I have 60 tables in MySQL on 60 different servers, from 192.168.69.1 to 192.168.69.60.

When I create a single MySQL connection and read data from the first table, it works.

But when I create a parameterized connection with an expression, it doesn't work and I see this error:

[ADO NET Source [2]] Error: An error occurred executing the provided SQL command: "select * from mdl_adobeconnect". SELECT command denied to user 'biteam'@'192.168.12.20' for table 'mdl_adobeconnect'

But I have full permission on all tables.



Missing gems when running bundler exec fastlane

I'm trying to run fastlane locally on a M1 MacBook. After running bundler exec fastlane deploy I get the following exception:

Could not find json-2.6.3, digest-crc-0.6.5, unf_ext-0.0.8.2 in locally installed gems
Run `bundle install` to install missing gems.

No exception is thrown on running bundle install and it seems like the missing gems are installed, but if I run bundler exec fastlane deploy again, I get the same exception as before.

Any tips on how I can fix this?



How to dynamically/programmatically subscribe to a datasource from a ThingsBoard widget?

I'm working with a ThingsBoard widget and I'm looking for a way to programmatically subscribe to a data source where the field/attribute or timeseries key is not predetermined.

Currently, I can utilize the dashboard state to subscribe to entities (assets, devices) or clients dynamically. However, this approach requires prior knowledge of the attribute name, which in my case is dynamic.

Is there a method or API within ThingsBoard that allows for such dynamic attribute subscriptions within a widget? Any examples or documentation pointers would be highly appreciated.



2023-11-08

How can I obtain an image's PPI in JavaScript if it's unavailable in EXIF Data?

I have a tool in frappe-framework that permits image uploads by checking the width (=1050px), height (=1500px) and file size (between 300KB and 3000KB) of an image. Along with the existing requirements, I need to add an additional check that only permits the upload if the PPI is 300. I am able to get the PPI from the EXIF metadata in JavaScript when it is present.

Is there an alternative approach or library I can use to get an image's PPI when it's not present in the EXIF data?



2023-11-07

Spring Boot application OracleDriver claims to not accept jdbcUrl

When launching a Spring Boot application in IntelliJ using the Spring Boot Application run configuration, running with Java 1.8, I am receiving the following message (only one listed for brevity's sake - but the same exception for each of the attempted URLs):

 Driver oracle.jdbc.OracleDriver claims to not accept jdbcUrl, "jdbc:oracle:thin:@redacted.redacted.us-east-1.rds.amazonaws.com:1234/abcd"

I have seen the recommendations on this answer and this answer but I have been unsuccessful in determining the root of the problem.

My configurations are as follows - I am using EnvFile locally to provide values that are normally handled by Vault in our deployed environments.

application.properties

spring.datasource.url="${DATASOURCE_URL}"
spring.datasource.driver-class-name="${SPRING_DATASOURCE_DRIVER-CLASS-NAME}"

environment value

DATASOURCE_URL=jdbc:oracle:thin:@redacted.redacted.us-east-1.rds.amazonaws.com:1521/abcd

# I have tried the following
# jdbc:oracle:thin:@redacted.redacted.us-east-1.rds.amazonaws.com:1521/abcd
# jdbc:oracle:thin:@redacted.redacted.us-east-1.rds.amazonaws.com:1521:abcd
# jdbc:oracle:thin://@redacted.redacted.us-east-1.rds.amazonaws.com:1521:abcd
# jdbc:oracle:thin://@redacted.redacted.us-east-1.rds.amazonaws.com:1521/abcd

SPRING_DATASOURCE_DRIVER-CLASS-NAME=oracle.jdbc.OracleDriver

pom.xml

  <properties>
    <java.version>1.8</java.version>
  </properties>
...
    <dependency>
      <groupId>com.oracle.database.jdbc</groupId>
      <artifactId>ojdbc8</artifactId>
      <version>19.9.0.0</version>
    </dependency>

The URL format appears to be correct as compared to the previous answer on this issue. What else might be causing the problem here?



create map with types that implement interface

How do I create a map with struct types that implement an interface?

I'm trying to create a map that stores the type for each column (database).

There are a few things wrong with the code and I don't really know how to solve it.

Type_string and Type_int implement sql.Scanner with the method Scan.

I want to be able to fetch a non-predefined set of fields from a database. I don't know if this is the right approach?

I want a less strict version than just passing a predefined struct to rows.Scan().

I'd prefer a solution without reflect if possible.

Types

type Type_string string

func (t *Type_string) Scan(value any) error {
    switch value := value.(type) {
    case []uint8:
        *t = Type_string(value)
    default:
        return fmt.Errorf("Invalid database type: %T %v", value, value)
    }
    return nil
}

type Type_int int

func (t *Type_int) Scan(value any) error {
    switch value := value.(type) {
    case int64:
        *t = Type_int(value)
    default:
        return fmt.Errorf("Invalid database type: %T %v", value, value)
    }
    return nil
}

Define table data types

type table_field struct {
    value_type  sql.Scanner
}

table_type := map[string]table_field{
    "id": table_field{
        value_type: Type_int{},
    },
    "name": table_field{
        value_type: Type_string{},
    },
}

Fetch from database

// Build pointer to pass to `rows.Scan()`
ptr     := make([]any, len(cols))
for i, name := range cols {
    ptr[i] = &table_type[name].value_type
}

if err := rows.Scan(ptr...); err != nil {
    fmt.Println("err:", err)
}

Error

invalid composite literal type Type_int
invalid composite literal type Type_string
invalid operation: cannot take address of table_type[name].value_type (value of type sql.Scanner)


2023-11-06

ansible xml module - set namespace & namespaced elements

I need to change the following xml

<network>
  <name>docker-machines</name>
  <uuid>ea91ff7c-aa2b-4fa9-b59d-d1fae70285ad</uuid>
  ...
</network>

to

<network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
  <name>docker-machines</name>
  <uuid>ea91ff7c-aa2b-4fa9-b59d-d1fae70285ad</uuid>
  ...
  <dns>
    <forwarder addr='5.1.66.255'/>
  </dns>
  <dnsmasq:options>
    <dnsmasq:option value='dhcp-option=option:ntp-server,10.10.14.1'/>
  </dnsmasq:options>
</network>

I have successfully managed to set the <dns><forwarder> element using this:

- name: Set resolver for docker-machines network
  xml:
    path: /etc/libvirt/qemu/networks/docker-machines.xml
    xpath: /network/dns
    set_children:
      - forwarder:
          addr: 8.8.8.8

Unfortunately, creating the namespaced elements does not seem to be that easy. For example, I have tried this:

- name: Set ntp server for docker-machines network dhcp
  xml:
    path: /etc/libvirt/qemu/networks/docker-machines.xml
    xpath: /network/dnsmasq:options/dnsmasq:option
    namespaces:
      dnsmasq: http://libvirt.org/schemas/network/dnsmasq/1.0
    attribute: value
    value: "dhcp-option=option:ntp-server,10.10.14.1"

...but it does not correctly set the namespace on <network> and instead creates weird <ns0:> elements.

How can I do this correctly?
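For what it's worth, the ns0: prefixes look like standard lxml behaviour (the xml module is built on lxml): an element created in a namespace that has no prefix declared on an ancestor gets an invented ns0 prefix, and the prefix map of an existing root cannot be changed after parsing. A small, purely illustrative Python sketch of what I think is happening:

from lxml import etree

NS = "http://libvirt.org/schemas/network/dnsmasq/1.0"

# Root created without a prefix mapping for NS: lxml invents one -> ns0:options
root = etree.Element("network")
etree.SubElement(root, "{%s}options" % NS)
print(etree.tostring(root, pretty_print=True).decode())

# Root created with nsmap={'dnsmasq': NS}: the desired dnsmasq: prefix is used
root = etree.Element("network", nsmap={"dnsmasq": NS})
options = etree.SubElement(root, "{%s}options" % NS)
etree.SubElement(options, "{%s}option" % NS,
                 value="dhcp-option=option:ntp-server,10.10.14.1")
print(etree.tostring(root, pretty_print=True).decode())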



How do we create siblings composition column out of gender and family id and individual id columns in R

I applied the suggested code to the original dataset, but it didn't produce the desired result in the siblings_composition column, which should be 1 for at least one male sibling, 2 for at least one female sibling, 3 for both male and female siblings, and 0 for no siblings. In the original dataset, BIRIMNO is the family id, CINSIYET is the gender and id is the individual id. As an illustration, I provide the result produced by the code below:

head(data)

# A tibble: 6 × 4
# Groups:   BIRIMNO [5]
  BIRIMNO CINSIYET       id siblings_composition
    <dbl> <fct>       <dbl>                <int>
1  144003 F        14400307                    3
2  144003 M        14400306                    3
3  144009 F        14400903                    3
4  144014 M        14401409                    3
5  144015 M        14401501                    2
6  144016 M        14401603                    3

For reproducibility on the original dataset, here is the result of:

dput(head(data, 100))

structure(list(BIRIMNO = c(144003, 144003, 144009, 144014, 144015, 
144016, 144020, 144020, 144021, 144025, 144025, 144025, 144028, 
144028, 144029, 144031, 144034, 144036, 144039, 144040, 144042, 
144042, 144046, 144047, 144047, 144049, 144054, 144056, 144056, 
144060, 144061, 144069, 144071, 144071, 144071, 144071, 144073, 
144074, 144074, 144077, 144079, 144080, 144084, 144084, 144084, 
144088, 144088, 144090, 144092, 144092, 144092, 144094, 144113, 
144118, 144120, 144122, 144123, 144123, 144123, 144124, 144127, 
144127, 144129, 144129, 144130, 144134, 144137, 144138, 144151, 
144152, 144154, 144158, 144162, 144162, 144162, 144162, 144163, 
144163, 144163, 144167, 144172, 144172, 144176, 144176, 144181, 
144181, 144183, 144185, 144189, 144202, 144202, 144214, 144215, 
144217, 144219, 144224, 144224, 144247, 144247, 144249), CINSIYET = structure(c(2L, 
1L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 
1L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 
1L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 
2L, 2L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 2L, 
2L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 
1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 
1L, 1L, 2L), levels = c("M", "F"), class = "factor"), id = c(14400307, 
14400306, 14400903, 14401409, 14401501, 14401603, 14402003, 14402004, 
14402103, 14402503, 14402505, 14402506, 14402803, 14402804, 14402904, 
14403104, 14403404, 14403603, 14403903, 14404003, 14404205, 14404204, 
14404603, 14404703, 14404704, 14404905, 14405403, 14405603, 14405604, 
14406004, 14406103, 14406903, 14407109, 14407112, 14407111, 14407110, 
14407303, 14407403, 14407404, 14407706, 14407908, 14408006, 14408405, 
14408404, 14408403, 14408803, 14408804, 14409004, 14409204, 14409205, 
14409203, 14409405, 14411303, 14411804, 14412003, 14412203, 14412304, 
14412306, 14412305, 14412407, 14412704, 14412705, 14412906, 14412905, 
14413003, 14413403, 14413703, 14413804, 14415103, 14415203, 14415404, 
14415803, 14416207, 14416204, 14416206, 14416205, 14416306, 14416307, 
14416308, 14416704, 14417204, 14417205, 14417603, 14417604, 14418104, 
14418103, 14418303, 14418503, 14418903, 14420204, 14420203, 14421403, 
14421503, 14421704, 14421903, 14422403, 14422404, 14424704, 14424703, 
14424903), siblings_composition = c(3L, 3L, 3L, 3L, 2L, 3L, 3L, 
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 2L, 2L, 
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 2L, 3L, 3L, 3L, 
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 2L, 
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L)), class = c("grouped_df", 
"tbl_df", "tbl", "data.frame"), row.names = c(NA, -100L), groups = structure(list(
    BIRIMNO = c(144003, 144009, 144014, 144015, 144016, 144020, 
    144021, 144025, 144028, 144029, 144031, 144034, 144036, 144039, 
    144040, 144042, 144046, 144047, 144049, 144054, 144056, 144060, 
    144061, 144069, 144071, 144073, 144074, 144077, 144079, 144080, 
    144084, 144088, 144090, 144092, 144094, 144113, 144118, 144120, 
    144122, 144123, 144124, 144127, 144129, 144130, 144134, 144137, 
    144138, 144151, 144152, 144154, 144158, 144162, 144163, 144167, 
    144172, 144176, 144181, 144183, 144185, 144189, 144202, 144214, 
    144215, 144217, 144219, 144224, 144247, 144249), .rows = structure(list(
        1:2, 3L, 4L, 5L, 6L, 7:8, 9L, 10:12, 13:14, 15L, 16L, 
        17L, 18L, 19L, 20L, 21:22, 23L, 24:25, 26L, 27L, 28:29, 
        30L, 31L, 32L, 33:36, 37L, 38:39, 40L, 41L, 42L, 43:45, 
        46:47, 48L, 49:51, 52L, 53L, 54L, 55L, 56L, 57:59, 60L, 
        61:62, 63:64, 65L, 66L, 67L, 68L, 69L, 70L, 71L, 72L, 
        73:76, 77:79, 80L, 81:82, 83:84, 85:86, 87L, 88L, 89L, 
        90:91, 92L, 93L, 94L, 95L, 96:97, 98:99, 100L), ptype = integer(0), class = c("vctrs_list_of", 
    "vctrs_vctr", "list"))), class = c("tbl_df", "tbl", "data.frame"
), row.names = c(NA, -68L), .drop = TRUE))

Many thanks



2023-11-05

Can I create a map of key-value pairs by deserializing from JSON?

I'm trying to create a Map<TKey, TValue> instance by deserializing a JSON document, but what I'm actually getting seems to be a different type of object, with none of the methods that Map<TKey, TValue> has.

I'm quite new to TypeScript (I work mostly with C#) so I created a little unit test using Jest to check whether Map<TKey, TValue> does what I want, specifically whether I can use the forEach method to iterate through each of the key-value pairs.

function createMap(): Map<number, string> {
    const map = new Map<number, string>();
    map.set(1, 'foo');
    map.set(2, 'bar');
    console.log(map); // Console output: Map(2) { 1 => 'foo', 2 => 'bar' }
    return map;
}

function iterateUsingForEach(map: Map<number, string>): number {
    let counter = 0;
    map.forEach((value: string, key: number, map: Map<number, string>) => {
        // Normally I'd do something more exciting here than just count the
        // elements, but in the interests of simplicity...
        counter++;
    });
    return counter;
}

describe('A Map instantiated using its constructor', () => {
    it('can be iterated using map.forEach()', () => {
        const map = createMap();
        const count = iterateUsingForEach(map);
        expect(count).toBe(2); // pass
    });
});

So far so good, the test passes, and I have a list of key-value pairs and can iterate over each of them, which is what I want. But the data to populate this object comes from a call to a web API, which returns a JSON document, which I need to deserialize to a Map<number, string>, so I added some more tests to simulate this use case:

function deserializeMap(): Map<number, string> {
    const json = '{ "1": "foo", "2": "bar" }';
    const map: Map<number, string> = JSON.parse(json) /*as Map<number, string>*/;
    console.log(map); // Console output: { '1': 'foo', '2': 'bar' }
    return map;
}

function iterateUsingObjectDotEntries(map: Map<number, string>): number {
    let count = 0;
    for (let [key, value] of Object.entries(map)) {
        count++;
    }
    return count;
}

describe('A Map instantiated using deserialization', () => {
    it('can be iterated using map.forEach()', () => {
        const map = deserializeMap();
        const count = iterateUsingForEach(map); // fail - TypeError: map.forEach is not a function
        expect(count).toBe(2);
    });

    it('can be iterated using Object.entries(map)', () => {
        const map = deserializeMap();
        const count = iterateUsingObjectDotEntries(map);
        expect(count).toBe(2); // pass
    });
});

The first test fails because map.forEach is not a function. Looking at the console output, it seems that this time map isn't actually a Map<number, string> instance at all, but a plain object with properties called 1 and 2, which probably has no prototype and therefore no methods, despite the fact that the deserializeMap function explicitly declares its return type as Map<number, string> and even tries to cast its return value to that type. (I'm not sure whether that cast even has any effect; edit: as per jcalz's comment, I've commented that "cast" out, as it's not actually a cast but a type assertion.)

I'm guessing that the type checking performed by the TypeScript compiler isn't complaining about this because map is still type compatible with Map<number, string> even though it's not actually an instance of that type?

The second test passes because it's not treating the map object as a list of key-value pairs; instead it iterates through each of the property names and values. Functionally, using Object.entries(map) does what I want, but it strikes me as analogous to using .NET's reflection to discover the names and values of an object's properties at runtime, so I was wondering whether this would have an impact on performance. So I added this benchmark to compare the performance of the two approaches:

// Requires the tinybench package, to install it run:
// npm install tinybench --save-dev
// Also requires the following import:
// import { Bench } from 'tinybench';
it('runs a benchmark to compare the performance of the two approaches', async () => {
    const bench = new Bench({ iterations: 10000 });
    const instantiated = createMap();
    const deserialized = deserializeMap();

    bench
        .add('map.forEach()', () => {
            iterateUsingForEach(instantiated);
        })
        .add('Object.entries(map)', () => {
            iterateUsingObjectDotEntries(deserialized);
        });

    await bench.run();
    console.table(bench.table());
});

I've run this quite a few times and the results vary, but Object.entries(map) seems to take between 1.5 and 2 times as long as map.forEach() to do the same job. Example output:

    ┌─────────┬───────────────────────┬─────────────┬───────────────────┬──────────┬─────────┐
    │ (index) │       Task Name       │   ops/sec   │ Average Time (ns) │  Margin  │ Samples │
    ├─────────┼───────────────────────┼─────────────┼───────────────────┼──────────┼─────────┤
    │    0    │    'map.forEach()'    │ '1,739,642' │ 574.8305975247757 │ '±8.27%' │ 869822  │
    │    1    │ 'Object.entries(map)' │ '1,074,330' │ 930.8122055151725 │ '±1.02%' │ 537167  │
    └─────────┴───────────────────────┴─────────────┴───────────────────┴──────────┴─────────┘

So my question is, can I deserialize the JSON document to an actual instance of Map<number, string>, complete with its prototype and methods, rather than an object which is just type compatible with that type, in order to take advantage of the faster performance of map.forEach()?

Edit: I don't think this is a duplicate of cast data received from backend to a frontend interface in typescript because that seems to be about mapping the properties of an object with one shape to the properties of a different object with a different shape, whereas I'm trying to create an object from JSON which has a prototype and methods.



2023-11-04

Swift code suspension point in asynchronous & synchronous code [closed]

I keep reading that synchronous function code runs to completion without interruption, while asynchronous functions can define potential suspension points via async/await. My question is: why can't synchronous function code be interrupted at any point? Can't the OS scheduler suspend the thread/process at any moment and give the CPU to a higher-priority thread/process (just like in any OS)? What am I misunderstanding here?



2023-11-03

Bazel unable to build go targets (version 1.21) due to new workspace mode

What version of rules_go are you using?

0.42.0

What version of gazelle are you using?

0.33.0

What version of Bazel are you using?

6.4.0

Does this issue reproduce with the latest releases of all the above?

yes

What operating system and processor architecture are you using?

MacOS Sonoma / Apple M2 Pro

What did you do?

I upgraded to Go 1.21. Now when I run bazel build //..., all of the external Go modules my program uses throw errors related to the new workspace mode introduced in Go 1.18 (https://go.dev/doc/tutorial/workspaces):

[screenshot of the workspace-mode errors omitted]

How do I resolve this? There is no feasible way to add all the modules used in that directory to a go.work file. Is there a way to turn this new workspace mode off?



2023-11-02

How to add a reference in a Sphinx custom directive?

I'm creating a custom directive to display the list of all the available components in the pydata-sphinx theme. I want to avoid using the raw directive, so I'm building a custom one to remain compatible with the other builders.

Here is the important part of the code:

"""A directive to generate the list of all the built-in components.

Read the content of the component folder and generate a list of all the components.
This list will display some information about the component and a link to the
GitHub file.
"""
from typing import List

from docutils import nodes
from sphinx.application import Sphinx
from sphinx.util.docutils import SphinxDirective


class ComponentListDirective(SphinxDirective):
    """A directive to generate the list of all the built-in components."""

    # ...

    def run(self) -> List[nodes.Node]:
        """Create the list."""
        
        # ... 
        # `component` is a list of pathlib Path
        # `url` is a list of string 
        # `docs` is a list of string

        # build the list of all the components
        items = []
        for component, url, doc in zip(components, urls, docs):
            items.append(nodes.list_item(
                "",
                nodes.reference("", component.name, refuri=url), #this line is the source of the issue
                nodes.Text(f": {doc}")
            ))

        return [nodes.bullet_list("", *items)]

When I try to execute the previous code in my sphinx build I get the following error:

Exception occurred:
  File "/home/borntobealive/libs/pydata-sphinx-theme/.nox/docs/lib/python3.10/site-packages/sphinx/writers/html5.py", line 225, in visit_reference
    assert len(node) == 1 and isinstance(node[0], nodes.image)
AssertionError

This assertion is performed by Sphinx if the parent node is not a TextElement. So I tried to wrap things in a Text node:

nodes.Text(nodes.reference("", component.name, refuri=url))

But then I only get the __repr__ of the reference, not a real link (I think it's because Text nodes only accept strings).

So I also tried using a TextElement:

nodes.TextElement("", "", nodes.reference("", component.name, refuri=url))

which also raised an error:

Exception occurred:
  File "/home/borntobealive/libs/pydata-sphinx-theme/.nox/docs/lib/python3.10/site-packages/docutils/nodes.py", line 2040, in unknown_departure
    raise NotImplementedError(
NotImplementedError: <class 'types.BootstrapHTML5Translator'> departing unknown node type: TextElement

Does someone know how I should add the link at the start of the bullet list item? If you're missing some context, you can find the complete code of the directive here (<100 lines).
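For completeness, this is the (untested) variation I was planning to try next: wrap the reference and the trailing text in a paragraph node so the reference ends up with a TextElement parent, and put that paragraph inside the list item. It reuses the components/urls/docs lists from run() above:

from docutils import nodes

items = []
for component, url, doc in zip(components, urls, docs):
    # paragraph is a TextElement, so visit_reference should accept the child reference
    para = nodes.paragraph("", "")
    para += nodes.reference("", component.name, refuri=url)
    para += nodes.Text(f": {doc}")
    items.append(nodes.list_item("", para))

bullet_list = nodes.bullet_list("", *items)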



2023-11-01

Python: urllib3 Library, unable to make requests. Error: urllib3.exceptions.MaxRetryError:

I am new to Python and I'm experiencing issues with the urllib3 library when running in a Linux environment. The problem is that the library is unable to make GET requests to any URLs, while the same requests work fine with other libraries that do similar things. The error that I am getting is:

urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='google.com', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(\<urllib3.connection.HTTPSConnection object at 0x7f43c27f6cd0\>, 'Connection to google.com timed out. (connect timeout=3)'))

As mentioned before, any other package such as requests or wget performs the request fine. Using a PoolManager instead, or changing the retry configuration, just keeps the script running indefinitely (a variation I still plan to try is sketched after the code below).

The code:

import urllib3

try: 
    response = urllib3.request('GET', 'https://google.com')
    if response.status == 200:
        print(response.data)
    else:
        print(response.status)
except Exception as e:
    print(e)
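The variation I plan to try configures the timeouts and retries explicitly and goes through a proxy by hand, since as far as I understand requests picks up HTTPS_PROXY from the environment while plain urllib3 does not. The proxy URL below is just whatever happens to be set in my environment:

import os
import urllib3

# Fail fast instead of hanging: explicit connect/read timeouts and a small retry budget.
timeout = urllib3.Timeout(connect=3.0, read=10.0)
retries = urllib3.Retry(total=2, backoff_factor=0.5)

proxy_url = os.environ.get("HTTPS_PROXY")  # only set on hosts that need an outbound proxy

if proxy_url:
    http = urllib3.ProxyManager(proxy_url, timeout=timeout, retries=retries)
else:
    http = urllib3.PoolManager(timeout=timeout, retries=retries)

response = http.request("GET", "https://google.com")
print(response.status)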


How to keep the hover enabled while the submenu is open

I have a simple table on my website listing devices and their characteristics (the example at the link below contains a shortened version of the table).

import "./styles.css";
import { SubMenu } from "./SubMenu";

const subMenuSlice = <SubMenu />;

const nodes = [
  {
    id: "0",
    name: "Samsung Galaxy",
    subMenu: subMenuSlice
  },
  {
    id: "1",
    name: "Iphone",
    subMenu: subMenuSlice
  }
];

export default function App() {
  return (
    <table>
      <tbody>
        {nodes.map((val, key) => (
          <tr key={key}>
            <td>{val.name}</td>
            <td>{val.subMenu}</td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}

https://codesandbox.io/s/romantic-rgb-5t7xkq

As you can see, when you hover over any of the rows, the entire row turns gray and an additional button appears. Clicking this button opens a submenu.

Description of the problem: when the user moves the cursor to the submenu, the hover (gray) disappears from the table row. How can I keep the hover enabled while the submenu is active (open)?