2022-10-31

define anagram Python

Could anyone help? I need to determine whether two words are anagrams, but exactly identical words should not count as anagrams. For example: cat - kitten → the words aren't anagrams; cat - act → the words are anagrams; cat - cat → should be reported as the same words. What should I do in this code to handle the same-word case:

s1 = input("Enter first word:")
s2 = input("Enter second word:")
a = sorted(s for s in s1.lower() if s.isalpha())
b = sorted(s for s in s2.lower() if s.isalpha())
if a == b:
    print("The words are anagrams.")
else:
    print("The words aren't anagrams.")
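One possible approach (a sketch, with the logic factored into a function; the message strings are the ones from the original code): check whether the normalized words are identical before falling back to the anagram comparison.

```python
def classify(s1, s2):
    # Keep only letters, lowercased and sorted, as in the original code.
    a = sorted(c for c in s1.lower() if c.isalpha())
    b = sorted(c for c in s2.lower() if c.isalpha())
    if s1.lower() == s2.lower():
        return "The same words."
    elif a == b:
        return "The words are anagrams."
    return "The words aren't anagrams."

print(classify("cat", "act"))  # The words are anagrams.
print(classify("cat", "cat"))  # The same words.
```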


Spring Boot / Spring Data Jpa: Where is the property "spring.jpa.properties.hibernate.generate_statistics" read in the java code?

I have gone through the Common Application Properties reference page. It contains the list of commonly used Spring properties, whose values can be defined in application.properties or application.yml.

So, just to explore and find out the convention regarding how and where the above properties are declared (read) in Java code, I started to search the code for spring-data-jpa related properties. From this SO answer I could see that the spring.datasource.driverClassName property is defined in org.springframework.boot.autoconfigure.jdbc.DataSourceProperties, i.e. at source

Similarly, I want to locate the code for other properties, such as spring.jpa.properties.hibernate.cache.use_query_cache and spring.jpa.properties.hibernate.generate_statistics. I looked at spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/orm but could not find any.

Any suggestions are highly appreciated.

I am just trying to understand Spring Boot a little deeper.

Note: I could locate spring.jpa but not the properties above.



Use logger in custom classes in my console application with Dependency Injection

I have a console application, and I added logger builder as follows in Program.cs:

using var loggerFactory = LoggerFactory.Create(builder =>
{
    builder
        .AddFilter("Microsoft", LogLevel.Warning)
        .AddFilter("System", LogLevel.Warning)
        .AddConsole();
});

ILogger logger = loggerFactory.CreateLogger<Program>();

Now say I have many classes in the project that all want to use the logger. How can I make it available to them?

For example, I have:

internal class Test
{
    public Test()
    {
        //use logger here
        //logger.LogInformation("Calling Test");
    }
}

In certain kinds of projects, such as Azure Functions, the logger is readily injected into the functions. Is there a similar way I can do this in a console application?



Add a new column with counter to a df based on an existing column

The data frame I have:

Column A  Column B
A         1
na        4
na        5
na        6
B         2
na        4
na        6
na        7
na        8
A         6
na        1
na        5

I am trying to loop through the data frame using Python and create a new Column C based on Column A's value. The output should look like this:

Column A  Column B  Column C
A         1         1
na        4         1
na        5         1
na        6         1
B         2         2
na        4         2
na        6         2
na        7         2
na        8         2
A         6         3
na        1         3
na        5         3

Basically, add a counter in Column C that increments whenever there is a new value in Column A after the NAs, even if that value is the same as a previous one (in this example A appears twice, but the counter gives the first A the value 1 and the second A the value 3).
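A sketch of one possible pandas approach (no explicit loop), assuming the "na" entries are real NaN values; if they are the literal string "na", replace .notna() with df["Column A"].ne("na"):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Column A": ["A", np.nan, np.nan, np.nan, "B", np.nan, np.nan, np.nan,
                 np.nan, "A", np.nan, np.nan],
    "Column B": [1, 4, 5, 6, 2, 4, 6, 7, 8, 6, 1, 5],
})

# Each non-null value in Column A starts a new group, so the cumulative
# count of non-null entries is exactly the desired counter.
df["Column C"] = df["Column A"].notna().cumsum()
```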



Is this the right use-case for on demand ISR in Next.js?

So I am building an app that lets you search for a word and gives you the word in other languages. A clone of this site.

I just tried the Next.js On-Demand ISR feature, because fetching from the client using API routes was too slow in my app.

I also thought of generating static pages, but that would produce hundreds of thousands of them.

Here's what my code looks like now:

export async function getStaticPaths() {
  console.log("[Next.js] Running getStaticPaths for word page");
  return {
    paths: [{ params: { word: "1" } }, { params: { word: "2" } }],
    fallback: "blocking",
  };
}

export async function getStaticProps({ params }: GetStaticPropsContext) {
  const db = await getContainer();

  const { resources } = await db.items
    .query(
      `SELECT c.language, c.Translation, c.id FROM c WHERE c.PrimaryWord='${params?.word}'`
    )
    .fetchAll();

  return {
    props: {
      resources,
    },
  };
}

I saw a performance improvement using this approach compared to normal API routes and client-side fetching, which took very long.

But I don't know whether this is the right use-case for on-demand ISR, or whether this is the right approach to it.



OpenCV with CUDA (Ubuntu 22.04), python import error: "undefined symbol: gst_base_src_new_segment"

I attempted to build OpenCV with CUDA support for my 6.1 compute-capability GPU on Ubuntu 22.04. After lots of struggle I finally got the installation to complete without errors (but with 4 "harmless" repeating warnings) using OpenCV 4.2.0, Nvidia Toolkit 11.5, gcc and g++ 10.4.0, and Python 3.10.6 (in a virtual environment in Anaconda). When I check the installation using "pkg-config --libs opencv" I get an error saying it doesn't exist, and when I try to import OpenCV in the Anaconda environment I get the error /lib/x86_64-linux-gnu/libgstapp-1.0.so.0: undefined symbol: gst_base_src_new_segment, which occurs in cv2/__init__.py when executing bootstrap().

The installation results in lots of promising-looking .h, .hpp and .so files. I have not installed packages from scratch like this before, so I am not sure how to tell whether the installation was successful. The error I get is the best case I was able to achieve by experimenting with various versions of gcc, the Nvidia toolkit, and OpenCV, and I need to run conda install -c conda-forge gcc=10.4.0 or else I get a different error (which is interesting, since I followed https://danielhavir.github.io/notes/install-opencv/ for the most part during installation, and it did not mention needing to do that). I could not find cv2.so in the list of installed files. I noticed that Python 3.10.6 was compiled with gcc 11.2.0; could this be the issue? I have spent a week and a half to get to this point, but I am not sure whether these are fixable issues within Python or whether the errors are a sign of a broader problem with the installation that requires a new compile.

Edit: I discovered that the issue is that Ubuntu 22.04 ships with GStreamer 1.20, while anaconda-navigator appears to use GStreamer 1.14.0. gst_base_src_new_segment was added in 1.18, and this is why Anaconda gets stuck. I can either downgrade my system's GStreamer or upgrade my Anaconda GStreamer. However, when I try to remove GStreamer from anaconda-navigator (conda uninstall gstreamer) it tries to uninstall anaconda-navigator itself, and when I try to upgrade GStreamer using conda install -c anaconda gstreamer it installs the 1.14.0 version unless I specify otherwise. Installing GStreamer 1.20.3 using conda install -c fastchan gstreamer does not change the core library version.



2022-10-30

GMock EXPECT_CALL returns FAILED while comparing two char arguments inside the method [duplicate]

As the title says, I'm using gmock to test my feature, but one issue I ran into is that EXPECT_CALL always checks the addresses of the two char arrays instead of their values. Below is my code example:

Base.h

//Create singleton class
class Base {
 private:
  static Base* _ptrInstance;
 public:
  static Base* getInstance();
  void sendString(const char* text, int value);
};

Base.cpp

#include "Base.h"
Base* Base::_ptrInstance = NULL;
Base* Base::getInstance(){
   if ( NULL == _ptrInstance ){
      _ptrInstance = new Base();
   }
   return _ptrInstance ;
}
void Base::sendString(const char* text, int value){
 //do something
}

Here is the function that needs to be tested: test.cpp

#include <iostream>
#include "Base.h"
int Test(){
 Base* poBase;
 char text[] = "hello_world";
 poBase->getInstance()->sendString(text, 0);
 return 0;
}

my MOCK method:

MOCK_METHOD2(sendString, void (const char* text, int value));

here is my test case:

TEST_F(myTest, sendStringTest){
  EXPECT_CALL(*BaseMock, sendString("hello_world", 0));
  Test();
}

When I execute my test, the above test case always FAILS:

Expected arg #0: is equal to 0x56e88a0d pointing to "hello_world"
           Actual: 0xffcb1601 pointing to "hello_world"
         Expected: to be called once
           Actual: never called - unsatisfied and active

Given this failure, I thought that EXPECT_CALL is comparing argument addresses instead of their values (here, the address of text[] created in test.cpp versus the address of "hello_world" inside EXPECT_CALL).

Does anyone know how to overcome this failure? Many thanks.



exact gurobi solver for chromatic number of coloring problem [error in the objective]

I'm trying to solve the coloring problem using Gurobi in an LP setting. However, I am doing something wrong, but I don't know what exactly.

!pip install gurobipy

import networkx as nx
import gurobipy as gp
from gurobipy import *

# create test graph
n = 70
p = 0.6
G = nx.erdos_renyi_graph(n, p)

nx.draw(G, with_labels=True)


# compute chromatic number -- ILP solve
m = gp.Model('chrom_num', env=e)

# get maximum number of variables necessary
k = max(dict(nx.degree(G)).values()) + 1
TEST= range(k)


# create k binary variables, y_0 ... y_{k-1} to indicate whether color k is used
y = []
for j in range(k):
    y.append(m.addVar(vtype=gp.GRB.BINARY, name='y_%d' % j, obj=1))

# create n * k binary variables, x_{l,j} that is 1 if node l is colored with j
x = []
for l in range(n):
    x.append([])
    for j in range(k):
        x[-1].append(m.addVar(vtype=gp.GRB.BINARY, name='x_%d_%d' % (l, j), obj=0))

# objective function is minimize colors used --> sum of y_0 ... y_{k-1}
m.setObjective(gp.quicksum(y[j] for j in TEST), gp.GRB.MINIMIZE)
m.update()

# add constraint -- each node gets exactly one color (sum of colors used is 1)
for u in range(n):
    m.addConstr(gp.quicksum(x[u]) == 1, name='NC_%d')

# add constraint -- keep track of colors used (y_j is set high if any time j is used)
for l in range(n):
    for j in range(k):
        m.addConstr(x[u][j] <= y[j], name='SH_%d_%d')

# add constraint -- adjacent nodes have different colors
for u in range(n):
    for v in G[u]:
        if v > u:
            for j in range(k):
                m.addConstr(x[u][j] + x[v][j] <= 1, name='ADJ_%d_%d_COL_%d')



# update model, solve, return the chromatic number
m.update()
m.optimize()
chrom_num = m.objVal


Sublime Text 4 sql highlight inside a string [closed]

Currently, if I have SQL commands inside a Python string, Sublime Text highlights them. However, if it's an f-string, it does not recognize the SQL inside. Is there any way to make it work the same way as with a regular string?

This reproduces with the built-in Python syntax, which supports SQL string highlighting. SQL is detected by a non-format string starting with an uppercase common SQL query identifier (SELECT, DELETE, UPDATE, INSERT, CREATE TABLE, etc.). However, by design this is not triggered for format strings (with an f or F prefix) or raw strings with an R prefix (the latter is very weird, given that lowercase r is widely used for regexes, while uppercase R is used for raw strings in any other context). Is there any way to enable embedded SQL highlighting for format strings? The maintainer's position is quite understandable (do not format SQL; use placeholders to be filled by a dedicated library), but it's not always the case: sometimes I'm building queries from parts using f-strings, where none of the parts are user input or variables, just fixed strings, for reusability & readability purposes.

example of sql highlight



Find a closest point to another set [closed]

I am trying to create a function to determine which of a set of points (represented as a list of coordinate lists) is closest to another given point. It should return the index in the list where that point appears.

Here is the code I've written so far:

def sq_dist(p1: tuple[int, int], p2: tuple[int, int]) -> int:
    """Square of Euclidean distance between p1 and p2

    >>> sq_dist([2, 3], [3, 5])
    5
    """
    x1, y1 = p1
    x2, y2 = p2
    dx = x2 - x1
    dy = y2 - y1
    return dx*dx + dy*dy



def closest_index(point: tuple[int, int], centroids: list[tuple[int, int]]) -> int:
    count = 0
    soopa_distance = 1000000000000000000000000000000000000000000000000
    
    for p in centroids:
        distance = sq_dist(point,p)
        if distance < soopa_distance:
            soopa_distance = distance
        else:
            count =  count +1
           
    return count

Unfortunately it doesn't return the expected index:

point_set1 = [[4,5],[3,4],[2,3],[1,2],[1,1]]
print(closest_index([0,0], point_set1))
# 0

point_set2 = [[1,2],[2,3],[3,4],[4,5],[1,1]]
print(closest_index([0,0], point_set2))
# 3

# Expected result in both cases: 4

Can you please help me identify and fix the error(s)?
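For comparison, a sketch of one possible fix: track the index of the running minimum instead of counting, since count is also incremented by rows that merely fail to improve the minimum:

```python
def sq_dist(p1, p2):
    """Square of the Euclidean distance between p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    return (x2 - x1) ** 2 + (y2 - y1) ** 2

def closest_index(point, centroids):
    # Remember the index of the best candidate seen so far.
    best = 0
    for i in range(1, len(centroids)):
        if sq_dist(point, centroids[i]) < sq_dist(point, centroids[best]):
            best = i
    return best

print(closest_index([0, 0], [[4, 5], [3, 4], [2, 3], [1, 2], [1, 1]]))  # 4
```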



Excel: #CALC! error (Nested Array) when using MAP functions for counting interval overlaps

I am struggling with the following formula; it works in some scenarios but not in all of them. The input name in LET holds the data set that is failing, producing a #CALC! error with the description "Nested Array":

=LET(input, {"N1",0,0;"N1",0,10;"N1",10,20},
  names, INDEX(input,,1), namesUx, UNIQUE(names), dates, FILTER(input, {0,1,1}),
  byRowResult, BYROW(namesUx, LAMBDA(name,
    LET(set, FILTER(dates, names=name),
      startDates, INDEX(set,,1), endDates, INDEX(set,,2), onePeriod, IF(ROWS(startDates)=1, TRUE, FALSE),
      IF(onePeriod, IF(startDates <= IF(endDates > 0, endDates, startDates + 1),0, 1),
        LET(seq, SEQUENCE(ROWS(startDates)),
          mapResult, MAP(startDates, endDates, seq, LAMBDA(start,end,idx,
            LET(incIdx, 1-N(ISNUMBER(XMATCH(seq,idx))),
              startInc, FILTER(startDates, incIdx), endInc, FILTER(endDates, incIdx),
              MAP(startInc, endInc,LAMBDA(ss,ee, N(AND(start <= ee, end >= ss))))
              ))),
              SUM(mapResult)))
    ))), HSTACK(namesUx, byRowResult)
)

If we replace the input values in the previous formula with the range A2:C4, the expected output would be in G1:H1:

sample input and output

Also provided is a graphical representation of the intervals to visualize each interval and its corresponding overlap. From the screenshot we have 2 overlaps.

Let's explain the input data and what the formula does:

Input data

  • First column: N1, N2, N3, representing names
  • Second column: start of the interval (I am using numeric values, but in my real situation these will be dates)
  • Third column: end of the interval (I am using numeric values, but in my real situation these will be dates)

Formula

The purpose of the formula is to identify, for each unique name, how many intervals overlap. The calculation goes row by row (BYROW) over the unique names, and for each start-end pair it counts the overlaps with respect to the other start-end pairs. I use FILTER to exclude the current start-end pair, with a condition such as FILTER(startDates, incIdx), and I tested that it works properly.

The condition to exclude the start date of the current name in the BYROW iteration is the following:

1-N(ISNUMBER(XMATCH(seq,idx)))

and used as second input argument of the FILTER function.

The rest is just to check the overlap range condition.

I separate the logic for when a name has only one interval from the rest, because the calculation is different. For a single interval I just want to check that the end date comes after the start date and treat the special case of 0. I tested that this particular case works.

Testing and workarounds

I have already isolated where the issue is and when it happens. The problem occurs in the following call:

MAP(startInc, endInc,LAMBDA(ss,ee, N(AND(start <= ee, end >= ss))))

when startInc and endInc have more than one row. It has nothing to do with the content of the LAMBDA function. I can use:

MAP(startInc, endInc,LAMBDA(ss,ee, 1))

and it still fails. The problem is with the input arrays startInc and endInc. If I use any other array, for example the following one, it doesn't work either:

MAP(seq,LAMBDA(ss, 1))

Similar result using names, startDates, etc.; even {1;2;3} fails. If I use idx it works, because it is not an array. Therefore the error happens with any type of array or range.

I have also tested that the input arguments are correct, with the correct shape and values, for example by replacing the MAP function with TEXTJOIN(",",, startInc)&" ; " (and also with endInc) and replacing SUM with CONCAT to concatenate the result.

In terms of input data I tested the following scenarios:

{"N1",0,0;"N1",0,10} -> Works
{"N1",0,0;"N1",0,10;"N2",10,0;"N2",10,20;"N3",20,10} -> Works
{"N1",0,0;"N1",0,10;"N1",10,20} -> Error
{"N1",0,0;"N1",0,10;"N1",10,0} -> Error
{"N1",0,0;"N1",0,10;"N1",10,0;"N1",20,10} -> Error
{"N1",0,0;"N1",0,10;"N2",10,0;"N2",10,20;"N2",20,10} -> Error

The cases that work do so because the array passed to the MAP function has size 1 (the number of duplicated names is less than 3).

I did some research on the internet about the #CALC! error, but there is not much detail about it; only a very trivial case is documented. I didn't find any indication of a limit on nested calls of the new array functions (BYROW, MAP, etc.).

In conclusion, it seems that the following nested structure produces this error:

=MAP({1;2;3}, LAMBDA(n, MAP({4;5;6}, LAMBDA(s, TRUE))))

even for a trivial case like this.

On the contrary, the following situation works:

=MAP({1;2;3}, LAMBDA(n, REDUCE("",{4;5;6}, LAMBDA(a,s, TRUE))))

because the output of REDUCE is not an array.

Any suggestion on how to circumvent this limitation in my original formula? Is this really a situation where an array cannot take another array as input? Is it a bug?



2022-10-29

Mikro Orm with nestjs does not load entities automatically

I'm trying to build a NestJS project with MikroORM, referencing load-entities-automatically, but MikroORM does not create tables automatically... The following code shows my settings.

AppModule.ts

@Module({
  imports: [
    MikroOrmModule.forRoot({
      type: 'postgresql',
      host: 'localhost',
      user: 'test',
      password: 'test',
      dbName: 'test',
      port: 5440,
      autoLoadEntities: true,
      entities: ['../entity/domain'],
      entitiesTs: ['../entity/domain'],
      allowGlobalContext: true,
      schemaGenerator: {
        createForeignKeyConstraints: false,
      },
    }),
    UserModule,
  ],
  controllers: [],
  providers: [],
})
export class AppModule {}

UserModule

@Module({
  imports: [UserEntityModule],
  controllers: [UserController],
  providers: [UserService, UserRepository],
})
export class UserModule {}

UserRepository

import { InjectRepository } from '@mikro-orm/nestjs';
import { EntityRepository } from '@mikro-orm/postgresql';
import { Injectable } from '@nestjs/common';

@Injectable()
export class UserRepository {
  constructor(
    @InjectRepository(User)
    private readonly userRepository: EntityRepository<User>,
  ) {}

  async save(req: UserSaveRequest) {
    const response = await this.userRepository
      .createQueryBuilder()
      .insert(req)
      .execute();

    return response;
  }
}

UserEntityModule

import { MikroOrmModule } from '@mikro-orm/nestjs';

@Module({
  imports: [MikroOrmModule.forFeature([User])],
  exports: [MikroOrmModule],
})
export class UserEntityModule {}

User

import { Entity, Property } from '@mikro-orm/core';

@Entity({ tableName: 'users' })
export class User extends BaseEntity {
  @Property({ comment: "user's nickname" })
  nickname: string;
}

BaseEntity

export abstract class BaseEntity {
  @PrimaryKey()
  id: number;
}

The code is simple but rather long... How can I solve this?



Is it Possible to create one form using PySimpleGUI to fetch data from excel table?

Dear all,

Is it possible to create a form using PySimpleGUI to fetch data from an Excel table? I need the reverse process of what has been done in this topic: https://github.com/Sven-Bo/data-entry-form-pysimplegui

Thanks in advance
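The data-fetching half might look like the following minimal sketch (the column names such as "Name" and the sample data are made-up assumptions; in the real app the DataFrame would come from pd.read_excel, and a PySimpleGUI window would display the returned fields):

```python
import pandas as pd

# Stand-in for pd.read_excel("data.xlsx") with hypothetical sample data.
df = pd.DataFrame({"Name": ["Alice", "Bob"], "Age": [30, 25]})

def fetch_record(df, key):
    """Return the first row whose Name matches key, as a dict, else None."""
    match = df.loc[df["Name"] == key]
    return match.iloc[0].to_dict() if not match.empty else None

print(fetch_record(df, "Bob"))
```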



plotly: bar stacking graph

I have a Dataframe with data as follows:

data={'A': {'2020_01': 3, '2020_02': 3, '2020_03': 1, '2020_04': 3, '2020_05': 1},
 'B': {'2020_01': 0, '2020_02': 0, '2020_03': 3, '2020_04': 0, '2020_05': 2},
 'other': {'2020_01': 0,
  '2020_02': 0,
  '2020_03': 3,
  '2020_04': 0,
  '2020_05': 2},
 'total': {'2020_01': 3,
  '2020_02': 3,
  '2020_03': 7,
  '2020_04': 3,
  '2020_05': 5}}
df = pd.DataFrame(data)

I would like to use Plotly to plot the dates on x and the stacked values of A, B, and other on y.

For a single bar I can do:

import plotly.express as px
fig = px.bar(df, x=df.index, y='A', text_auto=True,
             labels={'A':'bananas'}, height=400)
fig.show()

How do I have to proceed? I checked a bunch of documentation to no avail: https://community.plotly.com/t/plotly-express-bar-ignores-barmode-group/31511 https://plotly.com/python/bar-charts/ Plotly px.bar not stacking with barmode='stack' ...



Laravel 9.x proper organization of .mobileconfig file generation through a zsh script for further download

I am refactoring my project using Laravel 9, which was originally written in native PHP.

In the past, I implemented a simple method of generating an unsigned .mobileconfig file, saving it to the download folder, signing it with my certs, and creating a link for the user to download it. I then deleted old configurations via cron once a week.

I have following questions:

  1. Is it possible to create and simultaneously sign a similar configuration file "on the fly" in Laravel, without saving it on the backend? If not, what is the best way to organize this process so that the link will be available via api routing?

  2. Where is the best place to store script files in the standard Laravel project structure? Or is it easier to store them on the server side and run them using, for example, this Symfony package?

Some words about my new project: I'm using the default Laravel folder structure with Spatie DTOs and business logic organized in a service layer ('tiny controllers'). The main routes are written in routes/api.php.

My project structure:

app/Console
   /Exceptions
   /Helpers
     Helper.php <-- Class for methods like generatePassword(), generateRandomString() etc
   /Http
     /Controllers <-- Controllers for api
       UserController.php
     /Middleware
   /Providers
   /MyApp <-- Main folder for my application
     /Builders <-- Builders for SpatieDTO data-classes
       UserDataBuilder.php
     /DTO
       UserData.php <-- Spatie-extended data-classes
     /User
       /Services <-- Services for implementing main business-logic
         UserService.php
       /Model
         User.php
     /Tariff
       /Services
         TariffService.php
       /Model
         Tariff.php

I found only a few articles on similar cases, for example, this and this, but I could not determine the logic of my further actions.

Thanks!

Here is an overview of this process:

GenerateConfig.php

// Variables for script     
putenv("USERID=$userId");
putenv("PASSWORD=$password");
// Some salt for configuration file name
putenv("SALT=$random");

$unsignedConfig = "zsh -c /path/to/mobileconfig.sh";
shell_exec($unsignedConfig);
$linkForSignedCfgDownloading = 'example.com/path/to/config/cfg' . $random . '.mobileconfig';

<a href="' . $linkForSignedCfgDownloading . '">Download your configuration</a>

mobileconfig.sh

#!/bin/zsh

# Putting some needed variables
USERID=$USERID
PASSWORD=$PASSWORD

cat << EOF >/path/to/config/download/unsigned.mobileconfig
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    # Here some XML Data needed to be generated
</dict>
</plist>
EOF

# Next step after generating unsigned config file
source /path/to/next/script/signconfig.sh

signconfig.sh

#!/bin/zsh

USERID=$USERID
SALT=$SALT

# Signing new unsigned file

openssl smime -sign -in /path/to/config/download/unsigned.mobileconfig \
-out /path/to/config/download/cfg${SALT}.mobileconfig \
-signer /path/to/server/cert.pem -inkey /path/to/server/privkey.pem \
-certfile /path/to/server/chain.pem \
-outform der -nodetach

# Deleting unnecessary unsigned config file 

rm -f /path/to/config/download/unsigned.mobileconfig



How to handle "curl -XGET http://testdomain.com --request-target example.com:80" with Gorilla

How do I create a Path or PathPrefix in the Gorilla mux framework when the user forces a URI that does not begin with a slash? This causes a redirect to whatever the user specifies, e.g.:

# curl -Iv http://testdomain.com --request-target example.com:80
* Added testdomain.com to DNS cache
* Hostname testdomain.com was found in DNS cache
> HEAD example:80 HTTP/1.1
> Host: testdomain.com
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
HTTP/1.1 301 Moved Permanently
< Location: example:80
Location: example:80
< Date: Fri, 28 Oct 2022 18:53:03 GMT
Date: Fri, 28 Oct 2022 18:53:03 GMT
<
* Connection #0 to host REDACTED left intact

Currently, it does not match anything in my route setup, yet it generates a redirect. The middleware is also not catching anything to act on:

func filterMW(next http.Handler) http.Handler {
    logrus.Infof("In filterMW")
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        dump, _ := httputil.DumpRequest(r, false)
        logrus.Infof("%v", string(dump))
        next.ServeHTTP(w, r)
    })
}

httpRouter := mux.NewRouter()
httpRouter.Use(filterMW)

Thoughts?

Edit: Updated to include more details!



User prefix, code not working in discord.py rewrite branch

Guys! I'm pretty new to dpy, btw. How do you make a per-user prefix? For example, Asio's prefix is !, which is the default prefix, but I want to give Bsio a different prefix. I'm stuck on how to get the prefix; I couldn't find any tutorial or anything in the docs.

def pr(ctx, message):
    if message.author == 0000000:
        prefix = "!"
    else:
        prefix = "_"

It just doesn't work.
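A sketch of one common approach: pass a callable as command_prefix that looks up the author's ID in a (hypothetical) per-user mapping and falls back to the default. The discord.py import is omitted here; in the real bot this function would simply be passed to commands.Bot(command_prefix=get_prefix):

```python
# Hypothetical mapping of user IDs to custom prefixes.
user_prefixes = {111111111111111111: "_"}

def get_prefix(bot, message):
    # Compare IDs, not the Member/User object itself.
    return user_prefixes.get(message.author.id, "!")
```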



2022-10-28

How to find the maximum sum of elements of a list with a given minimum distance between them

I have been looking for a way to find the maximum sum of elements of a list, given a minimal distance of 3 between all the chosen numbers.

Suppose we have

list = [23, 48, 10, 55, 238, 11, 12, 23, 48, 10, 55, 238, 11, 12, 23, 48, 10, 55, 238, 11]

The best possible combination would be 23 + 238 + 238 + 238 = 737.

I've tried parsing the list, each time selecting the max of the slice list[i:i+4], like so:

23 -skip three indexes -> max of [238, 11, 12, 23] : 238 -skip three indexes -> max of [48, 10, 55, 238] : 238 skip three indexes -> max of [48, 10, 55, 238] : 238

This worked for this case, but not with other lists where I couldn't compare the skipped indexes.

Any help would be greatly appreciated.
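A dynamic-programming sketch, interpreting "skip three indexes" as requiring chosen indices to differ by at least 4 (the function name and gap parameter are made up for illustration):

```python
def max_sum_with_gap(values, gap=4):
    """Maximum sum of elements whose chosen indices differ by at least `gap`."""
    dp = [0] * (len(values) + 1)  # dp[i]: best sum using values[:i]
    for i in range(1, len(values) + 1):
        # Either skip values[i-1], or take it together with the best
        # solution ending at least `gap` positions earlier.
        take = values[i - 1] + (dp[i - gap] if i - gap >= 0 else 0)
        dp[i] = max(dp[i - 1], take)
    return dp[-1]

nums = [23, 48, 10, 55, 238, 11, 12, 23, 48, 10, 55, 238, 11, 12,
        23, 48, 10, 55, 238, 11]
print(max_sum_with_gap(nums))  # 737, matching 23 + 238 + 238 + 238
```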



Automation enable "Chat External, Drive Sharing Outside, Sites Creation and Editing" for OU

My organization just switched to Google Workspace, I have created my organization's OUs in Workspace as follows:

Example:

khanhhoaedu(level 1)
|
--- school1(level 2)
|    |
|     ----- gv(level 3)
|     ----- it(level 3)
|     ----- sp(level 3)
--- school2(level 2)
|    |
|     ----- gv(level 3)
|     ----- it(level 3)
|     ----- sp(level 3)
--- school3(level 2)
|    |
|     ----- gv(level 3)
|     ----- it(level 3)
|     ----- sp(level 3)
---- district1(level 2)
|    |
|     ----- school4(level 3)
|            |
|             ----- gv(level 4)
|             ----- it(level 4)
|             ----- sp(level 4)
|    |
|     ----- school5(level 3)
|            |
|             ----- gv(level 4)
|             ----- it(level 4)
|             ----- sp(level 4)
---- district2(level 2)
|    |
|     ----- school6(level 3)
|            |
|             ----- gv(level 4)
|             ----- it(level 4)
|             ----- sp(level 4)
|    |
|     ----- school7(level 3)
|            |
|             ----- gv(level 4)
|             ----- it(level 4)
|             ----- sp(level 4)

Right now, I want to turn on these services for all "gv(level 3)" and "gv(level 4)" OUs.

Since there are so many OUs that need to be turned on, I would like to ask whether there is a way to automate this.

I tried to learn Google Apps Script to automate this, but I am a network administrator, and studying programming languages takes time for me.



Postgres not using index when ORDER BY and LIMIT when LIMIT above X

I have been trying to debug an issue with Postgres where it decides not to use an index when LIMIT is above a specific value.

For example, I have a table of 150k rows, and when searching with a LIMIT of 286 it uses the index, while with a LIMIT above 286 it does not.

LIMIT 286 uses index

db=# explain (analyze, buffers) SELECT * FROM tempz.tempx AS r INNER JOIN tempz.tempy AS z ON (r.id_tempy=z.id) WHERE z.int_col=2000 AND z.string_col='temp_string' ORDER BY r.name ASC, r.type ASC, r.id ASC LIMIT 286;
                                                                          QUERY PLAN                                                                           
---------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.56..5024.12 rows=286 width=810) (actual time=0.030..0.992 rows=286 loops=1)
   Buffers: shared hit=921
   ->  Nested Loop  (cost=0.56..16968.23 rows=966 width=810) (actual time=0.030..0.977 rows=286 loops=1)
         Join Filter: (r.id_tempy = z.id)
         Rows Removed by Join Filter: 624
         Buffers: shared hit=921
         ->  Index Scan using tempz_tempx_name_type_id_idx on tempx r  (cost=0.42..14357.69 rows=173878 width=373) (actual time=0.016..0.742 rows=910 loops=1)
               Buffers: shared hit=919
         ->  Materialize  (cost=0.14..2.37 rows=1 width=409) (actual time=0.000..0.000 rows=1 loops=910)
               Buffers: shared hit=2
               ->  Index Scan using tempy_string_col_idx on tempy z  (cost=0.14..2.37 rows=1 width=409) (actual time=0.007..0.008 rows=1 loops=1)
                     Index Cond: (string_col = 'temp_string'::text)
                     Filter: (int_col = 2000)
                     Buffers: shared hit=2
 Planning Time: 0.161 ms
 Execution Time: 1.032 ms
(16 rows)

vs.

LIMIT 287 doing sort

db=# explain (analyze, buffers) SELECT * FROM tempz.tempx AS r INNER JOIN tempz.tempy AS z ON (r.id_tempy=z.id) WHERE z.int_col=2000 AND z.string_col='temp_string' ORDER BY r.name ASC, r.type ASC, r.id ASC LIMIT 287;
                                                                         QUERY PLAN                                                                          
-------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=4976.86..4977.58 rows=287 width=810) (actual time=49.802..49.828 rows=287 loops=1)
   Buffers: shared hit=37154
   ->  Sort  (cost=4976.86..4979.27 rows=966 width=810) (actual time=49.801..49.813 rows=287 loops=1)
         Sort Key: r.name, r.type, r.id
         Sort Method: top-N heapsort  Memory: 506kB
         Buffers: shared hit=37154
         ->  Nested Loop  (cost=0.42..4932.59 rows=966 width=810) (actual time=0.020..27.973 rows=51914 loops=1)
               Buffers: shared hit=37154
               ->  Seq Scan on tempy z  (cost=0.00..12.70 rows=1 width=409) (actual time=0.006..0.008 rows=1 loops=1)
                     Filter: ((int_col = 2000) AND (string_col = 'temp_string'::text))
                     Rows Removed by Filter: 2
                     Buffers: shared hit=1
               ->  Index Scan using tempx_id_tempy_idx on tempx r  (cost=0.42..4340.30 rows=57959 width=373) (actual time=0.012..17.075 rows=51914 loops=1)
                     Index Cond: (id_tempy = z.id)
                     Buffers: shared hit=37153
 Planning Time: 0.258 ms
 Execution Time: 49.907 ms
(17 rows)

Update:

This is Postgres 11 and VACUUM ANALYZE is run daily. Also, I have already tried using a CTE to isolate the filter, but the problem is specifically the sort:

->  Sort  (cost=4976.86..4979.27 rows=966 width=810) (actual time=49.801..49.813 rows=287 loops=1)
         Sort Key: r.name, r.type, r.id
         Sort Method: top-N heapsort  Memory: 506kB
         Buffers: shared hit=37154

Update 2:

After running VACUUM ANALYZE, the planner uses the index for some hours and then goes back to not using it.

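The nested-loop row estimate above is off by roughly 50x (966 expected vs 51,914 actual), which is what makes the top-N sort look cheap to the planner. One knob worth trying — a sketch, not a guaranteed fix — is raising the statistics target on the join column and re-analyzing, so ANALYZE samples its distribution more thoroughly (1000 here is an arbitrary illustrative value):

```sql
-- Hypothetical tuning sketch: give the planner better data on the join column.
ALTER TABLE tempz.tempx ALTER COLUMN id_tempy SET STATISTICS 1000;
ANALYZE tempz.tempx;
```

If the estimate improves, the sort plan should be priced more realistically and the index plan may win again.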


Android Studio Can't Connect to Internet/Download SDK on Windows 11

Really frustrating issue: I'm trying to install and run Android Studio on Windows 11, but every time I open it I get a message that a broken proxy setting is preventing the Android SDK from being downloaded. As far as I know, I'm not using a proxy at all.

I've read lots of posts explaining that it's a Windows Firewall issue, but nothing I've tried has worked. What I've tried so far:

  • Reinstalled Android Studio
  • Added Android Studio inbound & outbound exceptions to Windows Firewall
  • Disabled Windows Firewall entirely
  • Tried installing under different users on the same machine (same error)
  • Installed Electric Eel Beta version of Android Studio (instead of stable channel's Dolphin, same error)
  • Flushed DNS cache on my PC
  • Closing the first run wizard and trying to manually download the SDK from SDK Manager in Android Studio (always "Unavailable")

Nothing has worked. I see this user here faced my exact issue and they said they were able to fix it by "relaxing restrictions with Windows Firewall". I'm not sure what that means beyond what I've already tried.

As far as I know I'm not actively connecting to the internet on my machine via any Proxies or VPNs. Maybe my torrenting software changed settings somewhere? It's just strange because I've never had this issue before, even with torrenting software running/installed.

What should I do to get Android Studio to correctly download the Android SDK?

The symptoms, in short:

  • The first-time run always complains about proxy settings (what proxy??)
  • Auto proxy settings don't work; the connection check to https://cnn.com fails
  • The SDK always shows as "Unavailable"
  • The SDK Manager shows a strange error about a filesystem root; editing the settings does nothing (the SDK still shows as "Unavailable" for download)



T test between price and size




ClassCastException: ApiException cannot be cast to RevolvableApiException after updating location library to 21 version

I have updated the location services library in my app to the latest version, 21:
com.google.android.gms:play-services-location:21.0.0
and it breaks the logic for enabling location settings on users' phones.

I found the updated documentation page for this process: https://developers.google.com/android/reference/com/google/android/gms/location/SettingsClient,
and I am using the code below to trigger the popup that should ask the user to allow enabling location access on the phone:

val locationRequest = LocationRequest.Builder(Priority.PRIORITY_HIGH_ACCURACY, 10000)
                .setMinUpdateIntervalMillis(5000).build()

            val builder = LocationSettingsRequest.Builder().addLocationRequest(locationRequest)

            val client: SettingsClient = LocationServices.getSettingsClient(activity)
            val task: Task<LocationSettingsResponse> = client.checkLocationSettings(builder.build())
            task.addOnCompleteListener {
                try {
                    task.getResult(ApiException::class.java)
                } catch (exception: ApiException) {
                    when (exception.statusCode) {
                        LocationSettingsStatusCodes.RESOLUTION_REQUIRED -> {
                            try {
                                val resolvable = exception as ResolvableApiException
                                resolvable.startResolutionForResult(
                                    activity,
                                    1
                                )
                            } catch (e: Exception) {
                                e.printStackTrace()
                            }
                        }
                        LocationSettingsStatusCodes.SETTINGS_CHANGE_UNAVAILABLE -> {
                           
                        }
                    }
                }
            }

but that code from the documentation throws a ClassCastException on this line:
val resolvable = exception as ResolvableApiException
I can't figure out how to deal with the latest location updates. Thanks in advance for any help.



2022-10-27

How to sort time column in jqgrid?

We are binding a table using jqgrid. The first column is a time column in 12-hour format, and we are facing an issue sorting this data: it is sorted, but am/pm is not taken into consideration. Below is our code for binding the grid:

var newFieldsArray =
        [
            { name: "ID", title: "ID", type: "number", width: "50px", visible: false },
            {
                name: "TimeStart", title: "Start", type: "customTime", width: "100px", validate: "required",
                sorttype: "date",
                formatter : {     
                    date : {       
                    AmPm : ["am","pm","AM","PM"],       
                    }     
                },
                // datefmt: "m/d/Y h:i A",
                //sorttype: 'datetime', formatter: 'date', formatoptions: {newformat: 'd/m/y', srcformat: 'Y-m-d H:i:s'},
                insertTemplate: function () {
                    var $result = jsGrid.fields.customTime.prototype.insertTemplate.call(this); // original input

                    $result.val(varendTime);

                    return $result;
                },
                itemTemplate: function (value, item) {
                    return "<b style='display:none'>" + Date.parse(item.StartDate) + "</b><span>" + (item.TimeStart) + "</span>";
                }
            },
            {
                name: "TimeEnd", title: "End", type: "customTime", width: "100px", validate: "required",sorttype: "date", datefmt: "h:i"
            },
            { name: "TimeTotal", title: "Time", type: "text", width: "50px", readOnly: true },
            {
                name: "CoilPO", title: "Coil PO", type: "text", width: "50px", validate: "required",
                insertTemplate: function () {
                    var $result = jsGrid.fields.text.prototype.insertTemplate.call(this); // original input

                    $result.val(varlot);

                    return $result;
                }
            },
            { name: "Joints", title: "Joints", type: "integer", width: "60px" },
            { name: "CommercialGrade", title: "Commercial Grade", type: "integer", width: "80px" },
            { name: "QAHold", title: "QA Hold", type: "integer", width: "60px" },
            { name: "Rejected", title: "Reject", type: "integer", width: "60px" },
            { name: "ActionTaken", title: "Reason of Delay / Action Taken", type: "text", width: "120px" },
            {
                name: "ClassId", title: "Class",
                type: "select", items: classDataArr,//classData.filter(function(n){return classdt.indexOf(n.Id) != -1 }),//classData,
                valueField: "Id", textField: "Title",
                insertTemplate: function () {
                    debugger;
                    var taxCategoryField = this._grid.fields[12];
                    var $insertControl = jsGrid.fields.select.prototype.insertTemplate.call(this);

                    var classId = 0;
                    var taxCategory = $.grep(voiceData, function (team) {
                        return (team.ClassId) === classId && (team.StationId) == parseInt($('#ddlEquipmentName').val());
                    });
                    taxCategoryField.items = taxCategory;
                    $(".tax-insert").empty().append(taxCategoryField.insertTemplate());

                    $insertControl.on("change", function () {
                        debugger;
                        var classId = parseInt($(this).val());
                        var taxCategory = $.grep(voiceData, function (team) {
                            return (team.ClassId) === classId && (team.StationId) == parseInt($('#ddlEquipmentName').val());
                        });
                        taxCategoryField.items = taxCategory;
                        $(".tax-insert").empty().append(taxCategoryField.insertTemplate());
                    });

                    return $insertControl;
                },
                editTemplate: function (value) {
                    var taxCategoryField = this._grid.fields[12];
                    var $editControl = jsGrid.fields.select.prototype.editTemplate.call(this, value);

                    var changeCountry = function () {
                        var classId = parseInt($editControl.val());
                        var taxCategory = $.grep(voiceData, function (team) {
                            return (team.ClassId) === classId && (team.StationId) == parseInt($('#ddlEquipmentName').val());
                        });
                        taxCategoryField.items = taxCategory;
                        $(".tax-edit").empty().append(taxCategoryField.editTemplate());
                    };
                    debugger;
                    $editControl.on("change", changeCountry);
                    changeCountry();
                    return $editControl;
                }
            },
            {
                name: "VoiceId", title: "Voice", type: "select", items: voiceData,
                valueField: "Id", textField: "Title", width: "120px", validate: "required",
                insertcss: "tax-insert",
                editcss: "tax-edit",
                itemTemplate: function (teamId) {
                    var t = $.grep(voiceData, function (team) { return team.Id === teamId; })[0].Title;
                    return t;
                },
            },
            { name: "Remarks", title: "Remarks", type: "text", width: "110px" },
            { name: "control", type: "control" }
        ];

    hoursGrid.jsGrid("option", "fields", newFieldsArray);

Below are two screenshots of the data when we sort: Ascending

Descending

Can someone tell me what we are doing wrong?
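Since the hidden `<b>` prefix in itemTemplate is what actually drives the comparison, it needs a value that already encodes am/pm. A sketch of a converter (toMinutes is my name, not a jsGrid/jqGrid API):

```javascript
// Convert a 12-hour time string like "9:05 PM" into minutes since midnight,
// so a plain numeric comparison sorts correctly across am/pm.
function toMinutes(timeStr) {
  const match = /(\d{1,2}):(\d{2})\s*([AaPp][Mm])/.exec(timeStr);
  if (!match) return NaN;
  let hours = parseInt(match[1], 10) % 12;        // "12:xx" maps to 0:xx
  const minutes = parseInt(match[2], 10);
  if (match[3].toLowerCase() === 'pm') hours += 12;
  return hours * 60 + minutes;
}

console.log(toMinutes('9:05 AM'));  // 545
console.log(toMinutes('9:05 PM'));  // 1265
console.log(toMinutes('12:30 AM')); // 30
```

itemTemplate could then emit `"<b style='display:none'>" + toMinutes(item.TimeStart) + "</b>..."` instead of `Date.parse(item.StartDate)`. If the grid compares these as strings rather than numbers, zero-pad to a fixed width first (e.g. `String(toMinutes(...)).padStart(4, '0')`).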



Weird rb.velocity.y

When I'm moving left or right, rb.velocity.y changes to weird numbers. How can I get rid of this? It's really annoying because I want to change the gravity when the player is jumping, and because of this bug the gravity changes even when the player is just moving left and right.

My code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System; 

public class movePlayer : MonoBehaviour
{
    public ParticleSystem dust;
    public Animator animator;
    [SerializeField] public float speed;
    private float moveInput;
    public float jump;  
    private Rigidbody2D rb;
    public BoxCollider2D coll;
    private bool facingRight = true;

    [SerializeField]private bool isGrounded;
    public Transform GroundCheck;
    public LayerMask WhatIsGround;
    public float fallMultiplier = 3.5f;
    public float lowJumpMultiplier = 3f;
    private bool jumped = false;

    void Start()
    {
        rb = GetComponent<Rigidbody2D>();
        coll = GetComponent<BoxCollider2D>();
    }

    void Update()
    {
        //isGrounded = Physics2D.OverlapCircle(GroundCheck.position,checkRadius,WhatIsGround);
        // if (Input.GetKey(KeyCode.RightArrow)) moveInput = 1;
        // else if (Input.GetKey(KeyCode.LeftArrow)) moveInput = -1;
        // else moveInput = 0;
        moveInput = Input.GetAxis("Horizontal");
        if (!Input.GetKey(KeyCode.RightArrow) && !Input.GetKey(KeyCode.LeftArrow) && !Input.GetKey(KeyCode.A) && !Input.GetKey(KeyCode.D)) {
            moveInput = 0;
        }
        Debug.Log(moveInput);
        animator.SetFloat("speed", MathF.Abs(moveInput));

        if(facingRight == false && moveInput > 0){
            Flip();
            CreateDust();
        }
        else if(facingRight == true && moveInput < 0){
            Flip();
            CreateDust();
        }

        if(isGrounded == false)
        {
            animator.SetBool("isJumping", true);
        }

        if(isGrounded == true)
        {
 
            animator.SetBool("isJumping", false);
        }
        
        if ((Input.GetKeyDown(KeyCode.Space) || Input.GetKeyDown(KeyCode.UpArrow))  && isGrounded == true)
        {
            CreateDust();
            jumped = true;
            
        }  

    }

    void FixedUpdate(){

        isGrounded = Physics2D.BoxCast(coll.bounds.center,coll.bounds.size, 0, Vector2.down, .1f,WhatIsGround);

        rb.velocity = new Vector2(moveInput * speed, rb.velocity.y);

        if(jumped){
            rb.AddForce(Vector2.up * jump, ForceMode2D.Impulse);
            jumped = false;
        }

        if(rb.velocity.y < 0){
            rb.gravityScale = fallMultiplier;
        }
        else if(rb.velocity.y > 0 && !(Input.GetKeyDown(KeyCode.Space) || Input.GetKeyDown(KeyCode.UpArrow))){
            rb.gravityScale = lowJumpMultiplier;
        }
        else{
            rb.gravityScale = 1f;
        }  
    }

    void Flip(){
        facingRight = !facingRight;
        Vector3 Scaler = transform.localScale;
        Scaler.x *= -1;
        transform.localScale = Scaler;
    }

    void CreateDust(){
        dust.Play();
    }
}

I have tried changing how I move the player, but it didn't change anything.



SwiftUI - Access List Selection in Another View

I want to access the list selection in another view.

import SwiftUI

struct ExerciseSelectorView: View {
    @Environment(\.managedObjectContext) var viewContext
    @Environment(\.dismiss) var dismiss
    @FetchRequest(sortDescriptors: []) var exercises: FetchedResults<Exercise>
    @State var selectedItems = Set<Exercise>()
   
    var body: some View {
        NavigationView {
            
            VStack {
                Button("Add") {
                createExerciseSet()
                }
                List(selection: $selectedItems) {
                    ForEach(exercises, id: \.self) { e in
                        Text(e.exercisename)
                    }
                }
                .environment(\.editMode, .constant(EditMode.active))
                .navigationBarTitle(Text("Selected \(selectedItems.count) rows"))
                .toolbar { EditButton() }
                
            }
            
        }
        }
    }
}

I'm pretty sure selectedItems can't be a @State var, but when I make it a @Binding, the error is "No exact matches in call to initializer". When I try @ObservedObject, the error is "Generic struct 'ObservedObject' requires that 'Set<Exercise>' conform to 'ObservableObject'". So I'm not sure how to handle a set of a class type. I just want to be able to use the selectedItems set in a different view. Thank you in advance!

Below is the second view where I want to use the selectedItems, but I am getting "Cannot find 'selectedItems' in scope"

import SwiftUI

struct ExSetView: View {
    
    @Environment(\.managedObjectContext) var viewContext
    @Environment(\.dismiss) var dismiss
    @FetchRequest(sortDescriptors: []) var exsets: FetchedResults<ExerciseSet>
    
    var body: some View {
        NavigationView {
            VStack (alignment: .leading) {
                ForEach(selectedItems) { e in
                    
                    NavigationLink(
                        destination: ExSetInputView(selectedItems: e),
                        label: {
                            Text(e.exercise.exercisename)
                        }
                    )}
            }
            
            
        }
        
    }
}

UPDATE:

import SwiftUI

struct ExSetView: View {
    
    @Environment(\.managedObjectContext) var viewContext
    @Environment(\.dismiss) var dismiss
    @FetchRequest(sortDescriptors: []) var exsets: FetchedResults<ExerciseSet>
    @State var selectedItems = Set<Exercise>()
    
    var body: some View {
        NavigationView {
            VStack (alignment: .leading) {
               
                  Text("Set Count: \(selectedItems.count)")
            }
        }
    }
}


How to loop through rows of data in .CSV file in Cypress test?

In my Cypress test, I am trying to read data from a CSV file & loop through each row.

Below is the contents of my fixtures/logins.csv file:

| username | password    |
| john     | pword1      |
| james    | myPassword  |
| frank    | newPassword |

I want to loop through each row of data to log into an application.

Here is my latest code; it currently just logs the CSV file data as a string:

const csvUploaded = 'cypress/fixtures/logins.csv'

it('CSV test', () => {
    cy.readFile(csvUploaded, 'utf-8').then((txt) => {
        cy.log(txt)
        cy.log(typeof txt)
    });
});

txt is the below string at the moment:

username,password john,pword1 james,myPassword frank,newPassword
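A plain-JS sketch of turning that raw text into one object per row, keyed by the header line (parseCsv is a hypothetical helper, not a Cypress API):

```javascript
// Parse raw CSV text into an array of row objects keyed by the header line.
function parseCsv(txt) {
  const lines = txt.trim().split('\n').map((l) => l.trim()).filter(Boolean);
  const headers = lines[0].split(',').map((h) => h.trim());
  return lines.slice(1).map((line) => {
    const values = line.split(',').map((v) => v.trim());
    return Object.fromEntries(headers.map((h, i) => [h, values[i]]));
  });
}

const rows = parseCsv('username,password\njohn,pword1\njames,myPassword');
console.log(rows[0]); // { username: 'john', password: 'pword1' }
```

Inside the test, each parsed row can then drive a login attempt, e.g. `cy.readFile(csvUploaded, 'utf-8').then((txt) => { parseCsv(txt).forEach(({ username, password }) => { /* visit, type, submit */ }); });` — Cypress queues the commands for each iteration.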


How to use Update Panel for ASP.Net and Javascript?

I have a textbox with an onkeyup event that calls the JavaScript function GetEmployee(). This function fetches items from the database where the full name contains the value typed in the textbox.

Everything is working, but I want to add a Save button with server-side functionality, and it causes an error on postback.

I want to use an asp:UpdatePanel as the solution for the error, but I don't know how to set it up.

Here is my code

   <asp:TextBox ID="txtSelectUser" AutoPostBack="true" runat="server" class="InputFormStyle form-control" OnKeyUp="GetEmployee();"></asp:TextBox>
    <asp:ListBox ID="lstEmployee" runat="server" onchange="SelectUser();"></asp:ListBox>
       <asp:Button Text="Save" runat="server" OnClick="btnadd_Click" ID="btnadd"/>
<script type="text/javascript">
    var selectedUserId = 0;
    $(document).ready(function () {
        $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].style.display = "none";

        $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].onchange = function () {
            SelectUser();
        };
    });
    function SelectUser() {
        for (var i = 0; i < $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].options.length; i++) {
            if ($("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].options[i].selected) {
               
                $("#ContentPlaceHolder1_EmployeeControl_txtSelectUser")[0].value = $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].options[i].text;
                selectedUserId = $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].options[i].value;
                $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].style.display = "none";
                return;
            }
        }
    }
    function GetEmployee() {
        selectedUserId = 0;
        var FN = $("#ContentPlaceHolder1_EmployeeControl_txtSelectUser")[0].value;
        if (FN == "") {
            $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].style.display = "none";
        }
        else {
            $.ajax({
                url: '..../GetDataByFn',
                data: JSON.stringify({
                    fullname: $("#ContentPlaceHolder1_EmployeeControl_txtSelectUser")[0].value
                }),

                type: 'post',
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (data) {
                    $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].options.length = 0;
                    if (data.d.length != 0) {
                        for (var i = 0; i < data.d.length; i++) {
                            var opt = document.createElement("option");
                            $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].options.add(opt);
                            opt.text = data.d[i].FullNameEN;
                            opt.value = data.d[i].Id;
                            $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].options.add(opt);
                        }
                        $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].style.display = "inline-block";
                    }
                    else {
                        $("#ContentPlaceHolder1_EmployeeControl_lstEmployee")[0].style.display = "none";
                    }
                },
                error: function (error) {
                    console.log(JSON.stringify(error));
                }

            });
        }
    }
</script>
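A minimal sketch of the UpdatePanel wiring (a ScriptManager is required once per page; the panel and ScriptManager IDs are mine, the control markup is copied from above). Note that AutoPostBack="true" on the textbox forces a full postback on every change and likely fights the JS autocomplete, so it is dropped here:

```aspx
<asp:ScriptManager ID="ScriptManager1" runat="server" />
<asp:UpdatePanel ID="upEmployee" runat="server">
    <ContentTemplate>
        <asp:TextBox ID="txtSelectUser" runat="server" class="InputFormStyle form-control" OnKeyUp="GetEmployee();"></asp:TextBox>
        <asp:ListBox ID="lstEmployee" runat="server" onchange="SelectUser();"></asp:ListBox>
        <asp:Button Text="Save" runat="server" OnClick="btnadd_Click" ID="btnadd" />
    </ContentTemplate>
</asp:UpdatePanel>
```

With the button inside the ContentTemplate, its click becomes an asynchronous postback, so the page no longer fully reloads around the script-driven list.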


VS Code - Cannot debug Blazor WASM client project: Unable to launch browser "The URL's protocol must be one of ws, wss or ws+unix"

I have downloaded the Samples to accompany the official Microsoft Blazor documentation

https://github.com/dotnet/blazor-samples

In VS Code then I open the folder

..\blazor-samples-main\6.0\BlazorSample_WebAssembly

I let VS Code add the assets in the .vscode subfolder (launch.json, tasks.json)

I have modified the launch.json to be

{
"version": "0.2.0",
"configurations": [
    {
        "name": "Launch and Debug Standalone Blazor WebAssembly App",
        "type": "blazorwasm",
        "request": "launch",
        "cwd": "${workspaceFolder}",
        "url": "https://localhost:5001"
    }
]}

and I have modified the launchSettings.json located in the Properties folder to be

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:57953",
      "sslPort": 44307
    }
  },
  "profiles": {
    "blazorwasm": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "inspectUri": "{wsProtocol}://{url.hostname}:{url.port}/_framework/debug/ws-proxy?browser={browserInspectUri}",
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "BlazorSample": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "inspectUri": "{wsProtocol}://{url.hostname}:{url.port}/_framework/debug/ws-proxy?browser={browserInspectUri}",
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "inspectUri": "{wsProtocol}://{url.hostname}:{url.port}/_framework/debug/ws-proxy?browser={browserInspectUri}",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

I then run Run > Start Debugging (F5), but I get the error described in the title and captured in the screenshots

[screenshots of the error dialogs omitted]

Has anyone encountered this problem in Visual Studio Code and knows how to fix it? Have I missed anything or done anything wrong? The same code with the same blazorwasm configuration can be debugged successfully in Visual Studio 2022, but it fails as shown in Visual Studio Code, and I do not understand why.

Some references I have used are below, but I have not been able to work out the meaning of the error message. I have tried switching the browser to Edge via the type in the blazorwasm configuration, but that causes Edge to crash right off the bat.

Unable to launch browser "The URL's protocol must be one of ws, wss or ws+unix"

https://learn.microsoft.com/en-us/aspnet/core/blazor/debug?view=aspnetcore-6.0&tabs=visual-studio-code#debug-a-standalone-blazor-webassembly-app

https://dev.to/sacantrell/vs-code-and-blazor-wasm-debug-with-hot-reload-5317



2022-10-26

Insert Array Data from File into MYSQL using PHP

I am having trouble trying to connect to a MySQL DB to insert certain JSON values from a .json file.

I am still fairly new to working with data, connecting to a DB via PHP and such.

The DB is on the same cPanel/host/server where this file lives. Please let me know if I need to change, add, or improve anything.

What I am trying to do, is read the file.json and then insert those entries into a remote DB that is on my server.

What I am looking for is how to insert these values into MySQL, not print them on a page.

This question doesn't answer my question: How to extract and access data from JSON with PHP?

    <!DOCTYPE html>
<html>
<body>
<h1>Insert Data into DB</h1>
<?php
   
$username = "user";
$password = "pass";


// Create connection
$con = new PDO('mysql:host=host;dbname=DBNAME', $username, $password);
   

    //read the json file contents
    $jsondata = file_get_contents('http://path.to.file.com/file.json');
    
   
    
    //convert json object to php associative array
    $data = json_decode($jsondata, true);
    
    foreach ($data['entries'] as $jsons) // the rows live under the "entries" key
     {
          $id = null;
    $fname = null;
    $lname = null;
    $email = null;
    $phone = null;
    $date = null;
    $state = null;
    
    foreach($jsons as $key => $value)
     {
         if($key == 'id') {
             $id = $value;
         }
         
          if($key == 'date_created') {
             $date = $value;
         }
         
          if($key == '1') {
             $email = $value;
         }
         
          if($key == '3.3') {
             $fname = $value;
         }
         
          if($key == '3.6') {
             $lname = $value;
         }
         
         if($key == '5') {
             $phone = $value;
         }
         
         if($key == '6') {
             $state = $value;
         }
    
     }
    //insert into mysql table with a prepared statement (mysql_query() belongs
    //to the removed mysql_* extension and cannot be used with a PDO connection)
    $sql = "INSERT INTO contact(id, date, first, last, phone, email, state)
            VALUES(?, ?, ?, ?, ?, ?, ?)";
    $stmt = $con->prepare($sql);
    if (!$stmt->execute([$id, $date, $fname, $lname, $phone, $email, $state]))
    {
        die('Error : ' . implode(' ', $stmt->errorInfo()));
    }
    }
?>

</body>
</html>

here is an example of a JSON entry

{
    "total_count": 209,
    "entries": [
        {
            "id": "544537",
            "form_id": "2",
            "post_id": null,
            "date_created": "2022-10-21 17:26:18",
            "date_updated": "2022-10-21 17:26:18",
            "is_starred": "0",
            "is_read": "0",
            "ip": "68.126.222.136",
            "source_url": "\/contact\/",
            "user_agent": "Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/106.0.0.0 Safari\/537.36",
            "currency": "USD",
            "payment_status": null,
            "payment_date": null,
            "payment_amount": null,
            "payment_method": null,
            "transaction_id": null,
            "is_fulfilled": null,
            "created_by": null,
            "transaction_type": null,
            "status": "active",
            "1": "email@email.com",
            "2": "Contractor\/GC",
            "3.3": "first",
            "3.6": "last",
            "4": "Company",
            "5": "(111)132-4567",
            "6": "California",
            "7": "I am seeking for a bid to furnish and install",
            "is_approved": "3",
            "3.2": "",
            "3.4": "",
            "3.8": "",
            "8": "",
            "workflow_current_status_timestamp": false,
            "gpnf_entry_parent": false,
            "gpnf_entry_parent_form": false,
            "gpnf_entry_nested_form_field": false
        },


ValueError: Layer "model_1" expects 2 input(s), but it received 1 input tensors

I am trying to build a text classification model using a pretrained BERT model, but I keep getting an error when I try to fit the model.

The error says

ValueError: Layer "model_1" expects 2 input(s), but it received 1 input tensors.
Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 309) dtype=int32>]

I am also using TensorFlow and other Python libraries.

Here is my code:

import numpy as np
from data_helpers import load_data
from keras.models import Sequential
from keras.layers import Dense
from tensorflow.keras.layers import Embedding
from sklearn.model_selection import train_test_split
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.layers import Dropout,Flatten
from sklearn.metrics import classification_report 
from transformers import TFBertModel

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
from tensorflow.keras.layers import Embedding
# Data Preparation
print("Load data...")
x, y, vocabulary, vocabulary_inv = load_data()
np.save('data1-vocab.npy', vocabulary) 
sequence_length = x.shape[1]
X_train, X_test, y_train, y_test = train_test_split( x, y, test_size=0.2, random_state=42)

bert_model = TFBertModel.from_pretrained('bert-base-uncased')

def create_model(bert_model, max_len=sequence_length):
    
    ##params###
    opt = tf.keras.optimizers.Adam(learning_rate=1e-5, decay=1e-7)
    loss = tf.keras.losses.CategoricalCrossentropy()
    accuracy = tf.keras.metrics.CategoricalAccuracy()


    input_ids = tf.keras.Input(shape=(max_len,),dtype='int32')
    
    attention_masks = tf.keras.Input(shape=(max_len,),dtype='int32')
    
    embeddings = bert_model([input_ids,attention_masks])[1]
    
    output = tf.keras.layers.Dense(3, activation="softmax")(embeddings)
    
    model = tf.keras.models.Model(inputs = [input_ids,attention_masks], outputs = output)
    
    model.compile(opt, loss=loss, metrics=accuracy)
    
    
    return model


model = create_model(bert_model,sequence_length)
model.summary()

model.fit(X_train, y_train, epochs=32, batch_size=32,verbose=1)

I have tried changing the parameters of the .fit() call, but nothing works.

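The model above is built with two inputs (input_ids and attention_masks), so fit() must receive both, e.g. model.fit([X_train, train_masks], y_train, ...). Assuming X_train holds padded token ids with pad id 0 (an assumption — adjust to your tokenizer), a mask can be derived as in this sketch:

```python
import numpy as np

# Toy stand-in for padded BERT token ids (assumption: 0 is the padding id).
X_train = np.array([[101, 7592, 102,   0, 0],
                    [101, 2088, 999, 102, 0]])

# Attention mask: 1 where there is a real token, 0 where there is padding.
attention_masks = (X_train != 0).astype("int32")

print(attention_masks.tolist())  # [[1, 1, 1, 0, 0], [1, 1, 1, 1, 0]]

# The fit call then supplies both inputs the model expects:
# model.fit([X_train, attention_masks], y_train, epochs=32, batch_size=32)
```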


Double showing a lot of zeros with printf

I wanted to write a little calculator, which I had already done using cout and cin, and I used double instead of int so the output wouldn't be limited to integers.

At school we then started using printf() and scanf(). To practice the new functions, I wanted to rewrite my program, but when I run it I only see a lot of zeros after the decimal point in the output. Does anybody know why?

I wanted to rebuild a calculator with double instead of int to not only get integers as a result.

This is the code:

#include <stdio.h>

using namespace std;

int main(){
    printf ("Taschenrechner\n\n");
    int zahl1, zahl2;
    char rechop;
    double erg;
    
    printf ("Gib die Rechnung ein: ");
    scanf ("%d", &zahl1);
    scanf ("%c", &rechop);
    scanf ("%d", &zahl2);
    
    if (rechop == '+'){
        erg = zahl1+ zahl2;
        printf ("Ergebnis: ");
        printf ("%f", erg);
    }
    else if (rechop == '-'){
        erg = zahl1 - zahl2;
        printf ("Ergebnis: ");
        printf ("%f", erg);
    }
    else if (rechop == '*'){
        erg = zahl1 * zahl2;
        printf ("Ergebnis: ");
        printf ("%f", erg);
    }
    else if (rechop == '/'){
        erg = zahl1 / zahl2;
        printf ("Ergebnis: ");
        printf ("%f", erg);
    }
    else {
        printf ("Keine gültige Rechenoperation!");
    }
    return 0;
}


How to extract configuration or ARM template for Azure App Insights User Flow

I have an Azure App Insights instance, and a User Flow has been added to it and shared with others via the "Shared reports" folder. The User Flow was created manually using the Azure Portal UI. I would like to script the creation of this User Flow, but I am struggling to extract the configuration JSON from the component.

I believe the User Flow is a microsoft.insights/favorites component, but when using Azure Resource Explorer I do not see such a component under the App Insights resource's microsoft.insights provider.

Does anyone know how to export the ARM template for an App Insights User Flow? Or, at least get at the JSON configuration for the User Flow?



Testing strategy for typescript lambda with database dependency

I have a typescript lambda that calls a database. I have written unit tests for the individual components (a service that takes the db as a constructor argument for mocking etc.; that all works fine). I am now looking to write a test that calls the lambda handler itself. However, if I do this, I can no longer pass a db mock, so it tries to call the real database. Is there an established pattern for doing this, other than spinning up a local database in docker / an in-memory database etc.?

import { APIGatewayProxyCallback } from "aws-lambda";
import { myService } from "./service/myService";
import { myRepository } from "./repository/myRepository";

export const lambdaHandler = async (
    event: any,
    context: any,
    callback: APIGatewayProxyCallback,
): Promise<void> => {

    const service = new myService(new myRepository());
    const res = service.execute(event); // Contains code for interacting with db

        callback(null, {
          statusCode: 200,
          body: JSON.stringify(res),
        });
};
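
One established pattern is to export a handler factory so the handler receives its dependencies: production code builds it with the real repository, tests build it with a mock, and no database is needed. A minimal sketch (names like `Repo` and `createHandler` are illustrative, not from the original code):

```typescript
// All names here (Repo, createHandler) are illustrative.
interface Repo {
  query(event: unknown): Promise<unknown>;
}

// Factory: the returned handler closes over whatever repository it is given.
const createHandler =
  (repo: Repo) =>
  async (event: unknown): Promise<{ statusCode: number; body: string }> => {
    const res = await repo.query(event); // service logic elided
    return { statusCode: 200, body: JSON.stringify(res) };
  };

// Production wiring would live next to the factory, e.g.:
// export const lambdaHandler = createHandler(new myRepository());
```

Tests then call `createHandler(fakeRepo)` directly and invoke the result, exercising the full handler path without touching the real database.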



How to setup 28 different local notifications that repeat every 28 days at 9AM [closed]

I am looking to set up a batch of 28 local notifications (one each day) that will repeat again after 28 days until cancelled. These have to be triggered at 9:00 AM every day.

I have tried to set these notifications using UNCalendarNotificationTrigger by giving date components, adding a day while looping through all 28, and setting repeats to true.

The downside is that UNCalendarNotificationTrigger schedules against the calendar month and not 28 days, so for days 29, 30 and 31 there are no notifications displayed. I need something that will trigger back to back.

This question provided some idea of how to use UNTimeIntervalNotificationTrigger; this works for one set of recurring notifications, but after that they trigger at different intervals.

To make things easier to test, I have changed the interval to one message every minute for 5 messages and then scheduled 5 recurring messages using intervals. These recurring messages have repeats:true. After the 5 recurring messages, they don't follow the same pattern.

This code is in C#; if you have a solution in Swift, you are more than welcome to answer.

for (int i = 0; i < 5; i++)
{
    NSDateComponents DateComponents = new NSDateComponents();
    DateComponents.Minute = DateToSet.AddMinutes(i).Minute;
    DateComponents.Second = 0;
    UNMutableNotificationContent content = new UNMutableNotificationContent { Title = "Title " + i, Body = "Body " + i, Sound = UNNotificationSound.Default, CategoryIdentifier = "MyCategoryId" };
    UNCalendarNotificationTrigger trigger = UNCalendarNotificationTrigger.CreateTrigger(DateComponents, false);
    UNNotificationRequest request = UNNotificationRequest.FromIdentifier("MyCategoryId" + i.ToString(), content, trigger);
    UNUserNotificationCenter.Current.AddNotificationRequest(request, (err) =>
        {
            if (err != null)
            {
                Console.WriteLine("Error: {0}", err);
            }
            else
            {
                Console.WriteLine("Notification Scheduled: {0}", request);
             }
         });

     NSDate dd = DateToSet.AddMinutes(i).ToNSDate();
     var tt = NSCalendar.CurrentCalendar.DateByAddingUnit(NSCalendarUnit.Minute, 5, dd, NSCalendarOptions.None);
     //if DateToSet is later than current time, add that gap to the interval
     double AdditionalTime = 0;
     if (DateToSet.AddMinutes(i) > DateTime.Now)
     {
         AdditionalTime = (DateToSet.AddMinutes(i) - DateTime.Now).TotalSeconds;
     }
     var interval = tt.GetSecondsSince(dd) + AdditionalTime;

     UNMutableNotificationContent content1 = new UNMutableNotificationContent { Title = "Recurring Title", Body = "Recurring body " + i, Sound = UNNotificationSound.Default, CategoryIdentifier = "MyCategoryId" };
     var IntervalTrigger = UNTimeIntervalNotificationTrigger.CreateTrigger(interval, true);
     UNNotificationRequest RecurringReminderRequest = UNNotificationRequest.FromIdentifier("recurring_MyCategoryId" + i.ToString(), content1, IntervalTrigger);

UNUserNotificationCenter.Current.AddNotificationRequest(RecurringReminderRequest, (err) =>
                {
                    if (err != null)
                    {
                        Console.WriteLine("Error: {0}", err);
                    }
                    else
                    {
                        Console.WriteLine("RecurringReminderRequest Notification Scheduled: {0}", RecurringReminderRequest);
                    }
                });
            }

Any help is appreciated.
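
For what it's worth, a repeating UNTimeIntervalNotificationTrigger first fires after `interval` seconds and then repeats at that same interval, so a single trigger cannot both fire at 9:00 AM on day i and repeat every 28 days; that mismatch is consistent with the drifting intervals described above. One workaround is to schedule each slot's first fire as a non-repeating trigger and reschedule the next cycle whenever the app runs. The delay arithmetic can be sketched like this (the question's code is C#; this Python only illustrates the calculation, and naive datetimes ignore DST):

```python
from datetime import datetime, timedelta

CYCLE_DAYS = 28  # length of the repeating cycle

def seconds_until_slot(now, slot):
    """Seconds from `now` until the next 9:00 AM fire for day `slot`
    (0-27) of the cycle, assuming day 0's 9:00 AM is today."""
    target = now.replace(hour=9, minute=0, second=0,
                         microsecond=0) + timedelta(days=slot)
    if target <= now:                  # already passed: push to next cycle
        target += timedelta(days=CYCLE_DAYS)
    return (target - now).total_seconds()
```

Each of the 28 requests would get `CreateTrigger(seconds_until_slot(...), false)` (non-repeating), with the next cycle rescheduled on app launch.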



How to refer to the certificate from windows certificate store for MQ managed client in docker image?

I am trying to connect to MQ using the MQ managed client, which refers to the certificate from the certificate store. I have created a Docker image for the code and am now wondering how to ship the certificate along with it.

The end goal is to deploy the image to an OpenShift pod.

Hashtable properties = new Hashtable();
properties.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);
properties.Add(MQC.HOST_NAME_PROPERTY, "XXXXXXXXXXXX");
properties.Add(MQC.PORT_PROPERTY, "XXXX");
properties.Add(MQC.CHANNEL_PROPERTY, "XXXXX");

//SSL
properties.Add(MQC.SSL_CERT_STORE_PROPERTY, "*USER");
properties.Add(MQC.SSL_CIPHER_SPEC_PROPERTY, "TLS_RSA_WITH_AES_256_CBC_SHA256");
properties.Add(MQC.SSL_PEER_NAME_PROPERTY,"XXXXXX");
properties.Add(MQC.SSL_RESET_COUNT_PROPERTY, 0);

queueManager = new MQQueueManager(QueueManagerName, properties);

The code works fine when run directly, but I am not sure how to proceed with the Docker image.

UPDATE-1 Client Logs: SSL Server Certificate validation failed -

RemoteCertificateNameMismatch, RemoteCertificateChainErrors
0000016A 08:51:43.609321   54.1       ------------}  MQEncryptedSocket.ClientValidatingServerCertificate(Object,X509Certificate,X509Chain,SslPolicyErrors) rc=OK
0000016B 08:51:43.610594   54.1        System.Security.Authentication.AuthenticationException: The remote certificate was rejected by the provided RemoteCertificateValidationCallback.
   at System.Net.Security.SslStream.SendAuthResetSignal(ProtocolToken message, ExceptionDispatchInfo exception)
   at System.Net.Security.SslStream.CompleteHandshake(SslAuthenticationOptions sslAuthenticationOptions)
   at System.Net.Security.SslStream.ForceAuthenticationAsync[TIOAdapter](TIOAdapter adapter, Boolean receiveFirst, Byte[] reAuthenticationData, Boolean isApm)
   at System.Net.Security.SslStream.AuthenticateAsClient(SslClientAuthenticationOptions sslClientAuthenticationOptions)
   at System.Net.Security.SslStream.AuthenticateAsClient(String targetHost, X509CertificateCollection clientCertificates, SslProtocols enabledSslProtocols, Boolean checkCertificateRevocation)
   at IBM.WMQ.Nmqi.MQEncryptedSocket.MakeSecuredConnection()
0000016C 08:51:43.610655   54.1       -----------}  MQEncryptedSocket.MakeSecuredConnection() rc=OK
0000016D 08:51:43.610803   54.1        System.Security.Authentication.AuthenticationException: The remote certificate was rejected by the provided RemoteCertificateValidationCallback.
   at IBM.WMQ.Nmqi.MQEncryptedSocket.MakeSecuredConnection()
   at IBM.WMQ.Nmqi.MQEncryptedSocket..ctor(NmqiEnvironment env, MQTCPConnection conn, Socket socket, MQChannelDefinition mqcd, MQSSLConfigOptions sslConfigOptions)
   at IBM.WMQ.MQTCPConnection.ConnectSocket(String localAddr, String connectionName, Int32 options) 
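
The RemoteCertificateChainErrors in the log suggests the queue manager's CA is not trusted inside the container. A hedged Dockerfile sketch (image tag, file names and app name are all assumptions; in a real OpenShift deployment the certificate would normally be mounted as a secret rather than baked into the image):

```dockerfile
FROM mcr.microsoft.com/dotnet/runtime:6.0
WORKDIR /app
COPY publish/ .

# Trust the queue manager's CA inside the container so SslStream can
# build the chain (the RemoteCertificateChainErrors in the client log).
COPY mq-ca.crt /usr/local/share/ca-certificates/mq-ca.crt
RUN update-ca-certificates

ENTRYPOINT ["dotnet", "MyMqApp.dll"]
```

The RemoteCertificateNameMismatch part is separate: the connection name / SSLPEERNAME the client uses must match the subject of the queue manager's certificate.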


2022-10-25

Alternative to Parallel.ForEach that allows me to kill parallel processes immediately on application exit?

I am writing a simple console application that loads files from a database into a hashset. These files are then processed in a parallel foreach loop. The console application launches a new Process object for each file it needs to process, so it opens new console windows with the application running. I am doing it this way because of logging issues I have if I run parsing within the application, where logs from different threads write into each other.

The issue is, when I close the application, the parallel foreach loop still tries to process one more file before exiting. I want all tasks to stop immediately when I kill the application. Here are code excerpts:

My cancel is borrowed from: Capture console exit C#

Essentially the program performs some cleanup duties when it receives a cancel command such as CTRL+C or closing window with X button

The code I am trying to cancel is here:

class Program
{
    static bool exitSystem = false;
    private static bool _isFileLoadingDone;
    static ConcurrentDictionary<int, Tuple<Tdx2KlarfParserProcInfo, string>> _currentProcessesConcurrentDict = new ConcurrentDictionary<int, Tuple<Tdx2KlarfParserProcInfo, string>>();

    static void Main(string[] args)
    {
        try
        {
            if (args.Length == 0)
            {
                // Some boilerplate to react to close window event, CTRL-C, kill, etc
                ParseFilesUntilEmpty();
                while (!exitSystem)
                {
                    Thread.Sleep(500);
                }

            }

        }
    }

   
}

Which calls:

private static void LaunchFolderMode()
{
    //Some function launched from Task
    ParseFilesUntilEmpty();
}

And this calls:

private static void ParseFilesUntilEmpty()
{
    while (!_isFileLoadingDone)
    {
        ParseFiles();
    }
    
    ParseFiles();

}

Which calls:

private static void ParseFiles()
{
    filesToProcess = new HashSet<string>() { @"file1", "file2", "file3", "file4" }; // I actually get files from a db; this is just an example
    //_fileStack = new ConcurrentStack<string>(filesToProcess);
    int parallelCount = 2;
    Parallel.ForEach(filesToProcess, new ParallelOptions { MaxDegreeOfParallelism = parallelCount },
        tdxFile =>{
            ConfigureAndStartProcess(tdxFile);
        });
    
}

Which finally calls:

public static void ConfigureAndStartProcess(object fileName)
{
    string fileFullPath = fileName.ToString();
    Process proc = new Process();
    string fileFullPathArg1 = fileFullPath;
    string appName = @".\TDXXMLParser.exe";
    if (fileFullPathArg1.Contains(".gz"))
    {
        StartExe(appName, proc, fileFullPathArg1);  //I set up the arguments and launch the exes. And add the processes to _currentProcessesConcurrentDict
        proc.WaitForExit();
        _currentProcessesConcurrentDict.TryRemove(proc.Id, out Tuple<Tdx2KlarfParserProcInfo, string> procFileTypePair);
        proc.Dispose();
    }

}

The concurrent dictionary to monitor processes uses the following class in the tuple:

public class Tdx2KlarfParserProcInfo
{
    public int ProcId { get; set; }
    public List<long> MemoryAtIntervalList { get; set; } = new List<long>();
}

Given how long these code excerpts already are, I omitted the StartExe() function; all it does is set up arguments and start the process. Is there a better parallel-processing method I can use that will let me kill whatever files I am currently processing without immediately trying to start a new process, which Parallel.ForEach does?

I have tried stopping it with the ParallelLoopState.Stop method, but it still tries to process one more file.
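
The idiomatic C# approach is a CancellationTokenSource whose token is passed via ParallelOptions.CancellationToken, combined with explicitly Kill()-ing the Process objects tracked in the concurrent dictionary from the exit handler; Parallel.ForEach then stops scheduling new iterations once the token is cancelled. The same cooperative-cancellation pattern, sketched in Python with invented names (a shared stop flag is checked before each unit of work, so no new child is started once shutdown begins):

```python
import threading
from concurrent.futures import ThreadPoolExecutor, wait

# Shared stop flag, set by the exit handler. Mirrors the structure of
# ParseFiles above; all names here are invented.
stop_requested = threading.Event()

def process_file(name, started):
    if stop_requested.is_set():     # cooperative check: skip new work
        return
    started.append(name)            # stand-in for launching the child exe

def parse_files(files, parallel_count=2):
    started = []
    with ThreadPoolExecutor(max_workers=parallel_count) as pool:
        wait([pool.submit(process_file, f, started) for f in files])
    return started
```

An exit handler would call `stop_requested.set()` and then kill the already-running children, which is what CancellationToken plus Process.Kill() gives you in C#.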



Using roll-forward window to create a training set for ML based on multivariate time series

Based on the simplifed sample dataframe

import pandas as pd
import numpy as np
timestamps = pd.date_range(start='2017-01-01', end='2017-01-5', inclusive='left')
values = np.arange(0,len(timestamps))
df = pd.DataFrame({'A': values ,'B' : values*2},
                       index = timestamps )
print(df)

            A  B
2017-01-01  0  0
2017-01-02  1  2
2017-01-03  2  4
2017-01-04  3  6

I want to use a roll-forward window of size 2 with a stride of 1 to create a resulting dataframe like

     timestep_1  timestep_2  target_A  
0  A 0           1           2         
   B 0           2           2         
1  A 1           2           3
   B 2           4           3

The goal is to create a dataframe for training an ML model that predicts target values based on the values of the n previous timesteps (n=2 in the example above).

I.e., for each window step, a data item is created with the two values of A and B in that window plus the A value immediately to the right of the window as target_A, where the index is the number of the data item.

My first idea was to use pandas

https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rolling.html

But that seems to only work in combination with aggregate functions such as sum, which is a completely different use case.

Any ideas on how to implement this rolling-window-based sampling approach?
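
One way is to build the rows explicitly with a plain loop over window positions; `DataFrame.rolling` is indeed aimed at aggregations, not at emitting feature rows. A sketch (the helper name and column layout are assumptions matching the example above):

```python
import pandas as pd

def make_training_frame(df, n=2, target_col="A"):
    """One row per (window position, feature): the n values inside the
    window become timestep_1..timestep_n, and the target column's value
    immediately right of the window becomes target_<col>."""
    rows = []
    for start in range(len(df) - n):          # stride of 1
        window = df.iloc[start:start + n]
        for col in df.columns:
            row = {"item": start, "feature": col,
                   f"target_{target_col}": df[target_col].iloc[start + n]}
            for i in range(n):
                row[f"timestep_{i + 1}"] = window[col].iloc[i]
            rows.append(row)
    return pd.DataFrame(rows).set_index(["item", "feature"])
```

For large frames, `numpy.lib.stride_tricks.sliding_window_view` can produce the windows without a Python loop, but the loop version is easier to adapt to other strides and targets.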



How to apply 1/σ^2 weight matrix to find Weighted Least Squares Solution

I am trying to solve a Weighted Least Squares problem in Python (using numpy), but I am unsure how to apply the weight matrix W, whose diagonal entries are wᵢ = 1/σᵢ², in the normal-equation solution β = (XᵀWX)⁻¹XᵀWY.

This is what I have done so far:

import numpy as np
import matplotlib.pyplot as plt

X = np.random.rand(50) #Generate X values 
Y = 2 + 3*X + np.random.rand(50) #Y Values
plt.plot(X,Y,'o')
plt.xlabel('X')
plt.ylabel('Y')
W = ?? 
X_b = np.c_[np.ones((50,1)), X] #generate [1,x]
beta = np.linalg.inv(X_b.T.dot(W).dot(X_b)).dot(X_b.T).dot(W).dot(Y)
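
For per-point standard deviations σᵢ, the weight matrix is diagonal with entries 1/σᵢ². A sketch of the missing piece (the σ values here are invented for illustration, since the generated data has no stated uncertainties):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random(50)                        # X values
sigma = 0.05 + 0.1 * rng.random(50)       # assumed per-point std devs
Y = 2 + 3 * X + rng.normal(0, sigma)      # noise scaled by sigma

W = np.diag(1.0 / sigma**2)               # weight matrix, W_ii = 1/sigma_i^2
X_b = np.c_[np.ones((50, 1)), X]          # design matrix [1, x]
# Normal equations: beta = (X^T W X)^{-1} X^T W Y
beta = np.linalg.solve(X_b.T @ W @ X_b, X_b.T @ W @ Y)
```

`np.linalg.solve` is used instead of explicitly inverting XᵀWX, which is numerically more stable; the result is the same β = (XᵀWX)⁻¹XᵀWY as in the code above.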


Roboflow YOLOv5 training: same weight-file size problem

I am trying to train a model via the Roboflow YOLOv5 notebook on Colab. I always use pre-labeled Roboflow data. The problem is: all the final .pt weight files have the same size (14.6 MB).

Big dataset or tiny dataset, there is no difference in size. That seems odd to me. Can someone explain it?

colab notebook :https://colab.research.google.com/drive/1gDZ2xcTOgR39tGGs-EZ6i3RTs16wmzZQ



How to read /dev/input/mice more accurately?

I'm writing a program that reads /dev/input/mice to get relative x,y positions and computes the absolute distance the cursor moves. If I move my mouse at a normal speed starting at the center of the screen, the result is pretty accurate (960). However, if I move my mouse really fast, the absolute distance is not accurate.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/select.h>

int main() {
    signed char x, y;
    signed char buf[6];
    int fd;
    fd_set readfds;
    int screen_x = 1920;
    int screen_y = 1080;
    int x_total = 0, y_total = 0;

    // Create the file descriptor
    fd = open("/dev/input/mice", O_RDONLY);
    if (fd == -1) {
        printf("Error Opening /dev/input/mice\n");
        return 1;
    }

    printf("sizeof(buf): %zu\n", sizeof(buf));

    // Loop that reads relative position in /device/input/mice
    while(1) {
        // Set the file descriptor
        FD_ZERO(&readfds);
        FD_SET(fd,&readfds);
        select(fd+1, &readfds, NULL, NULL, NULL);

        // Check if the fd is set successfully
        if(FD_ISSET(fd,&readfds)) { 
            // Check if reading fails
            if(read(fd, buf, sizeof(buf)) <= 0) {  
                continue;  
            }

            // Relative positions
            x = buf[1];
            y = buf[2];
            printf("x=%d, y=%d\n", x, y);
            // Assume that mouse starts at the center
            x_total += x;
            y_total += y;
            printf("x_total: %d; y_total: %d\n", x_total, y_total);
        }  
    }
    close(fd);
    return 0;
}

I use xdotool mousemove 960 540 to get the cursor at the center and then run the program. Output is something like:

x_total: 309; y_total:0
x= 3, y=2
x_total:312; y_total:2

So if I move the cursor from the center towards the right edge really fast, at the time the cursor reaches the right edge, x_total is going to be somewhere around 500 which should've been 960.
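
Two things are worth checking. /dev/input/mice speaks the 3-byte PS/2 protocol, so reading into a 6-byte buffer can consume two packets at once and misparse them; and the deltas are 9-bit signed values whose sign bit lives in byte 0, with overflow flags set during fast motion, when counts are clipped and genuinely lost. A decoding sketch under those assumptions:

```c
/* /dev/input/mice emits 3-byte PS/2 packets:
 *   byte 0: flags  (bit 3 always 1, bit 4 = X sign, bit 5 = Y sign,
 *                   bits 6/7 = X/Y overflow)
 *   byte 1: low 8 bits of dx     byte 2: low 8 bits of dy
 * The deltas are 9-bit signed values, so the sign must come from
 * byte 0 rather than from signed char truncation. */
int decode_dx(const unsigned char p[3])
{
    int dx = p[1];
    if (p[0] & 0x10)
        dx -= 256;                 /* apply the 9-bit sign */
    return dx;
}

int decode_dy(const unsigned char p[3])
{
    int dy = p[2];
    if (p[0] & 0x20)
        dy -= 256;
    return dy;
}

int overflowed(const unsigned char p[3])
{
    /* set during fast motion: the device clipped the delta and the
       missing counts cannot be recovered from this interface */
    return (p[0] & 0xC0) != 0;
}
```

In the loop above that means reading exactly 3 bytes per packet (`unsigned char buf[3]; read(fd, buf, 3)`). When `overflowed()` reports clipping, /dev/input/mice simply cannot deliver the lost counts; reading the evdev interface (/dev/input/event*, REL_X/REL_Y events) does not have this limitation and is the more accurate option for fast motion.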



VS Code: avoid overaggressive parenthesis removal (in Scala / Metals)

I don't think this is a Scala- or Metals-specific issue, but that's where I'm seeing it.

VS Code aggressively removes parentheses, but not in a symmetric fashion. For instance, if I type the following:

Before image, two sets of parentheses

and I want to remove the inside parentheses, I would start by removing the inner closing parenthesis:

closing inner parenthesis removed

OK, the highlighting is already suggesting the problem: when I removed the inner parenthesis, it is the outer one that looks unmatched.

If I now go ahead and remove the inner opening parenthesis, both the opening and closing parentheses are lost:

mismatched with only opening outer parenthesis remaining


So maybe there is a right way to remove them that I am missing. Instead of removing the inside closing parenthesis, I will start by removing the inside opening parenthesis:

inner opening parenthesis removed

Unfortunately, in this case only the inner opening parenthesis is removed, so it is still mismatched.

Ugly Workaround

So for now, I either remove the inner closing parenthesis, then the inner opening parenthesis (which removes both, as shown), then go back and re-add the lost closing parenthesis. Or, I remove the inner opening parenthesis, then go to the end of the line and remove the extra closing parenthesis.

Because I'll do this hundreds of times a day, the few wasted keystrokes really add up.

What did VS Code want us to do here? What is the right way to engage with its parenthesis autodeletion?
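
If the goal is simply to stop the paired deletion, the `editor.autoClosingDelete` setting (I believe this is the relevant one in recent VS Code versions; worth verifying in yours) can be set to "never" in settings.json:

```jsonc
{
  // When deleting an opening quote or bracket, never automatically
  // delete the adjacent closing one.
  "editor.autoClosingDelete": "never"
}
```

With this set, deleting the inner opening parenthesis leaves the closing one in place, to be removed explicitly.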



Inherit a custom user model's fields from a parent class to a child class between two different applications

Hello kings and queens!

I'm working on a project and got stuck on a (for me) complicated issue. I have one model (generalpage.models) where all the common info about the users is stored. A different app (profilesettings) is where all the profile-page-related functions will be coded.

I tried to inherit the model fields from the User class in generalpage.models into profilesettings.models by simply writing UserProfile(User). When I did this, an empty form was created in the admin panel. So basically, the information already stored by generalpage.models was not inherited into profilesettings.models; I created an entirely new table in the database.

my questions are:

  1. Is it possible to create an abstract class for a custom user model?
  2. Is there a proper way to handle classes and create a method in profilesettings.models that fills the UserProfile form with the data already stored in database created by the User class?

Can someone please explain how the information can be passed from one application to another without creating a new empty form?

Filestructure:

(screenshot not included)

Admin panel:

(screenshot not included)

generalpage.models:

from random import choices
from secrets import choice
from unittest.util import _MAX_LENGTH
from django.db import models
from django.contrib.auth.models import AbstractBaseUser, User, PermissionsMixin
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
from generalpage.managers import CustomUserManager
from django.conf import settings

from generalpage.managers import CustomUserManager
# Create your models here.

sex_choices = ( ("0", "Man"),
                ("1", "Kvinna"),
                ("2", "Trans")
               )

class User(AbstractBaseUser, PermissionsMixin):
    user = models.CharField(settings.AUTH_USER_MODEL, null=True, max_length=50)
    
    username = models.CharField(_("Användarnamn"), max_length=100, null=True, unique=True)
    age = models.IntegerField(_("Ålder"),null=True, blank=False)
    email = models.EmailField(_("E-mail"), unique=True, null=False)
    
    country = models.CharField(_("Land"),max_length=50, null=True, blank=True)
    county = models.CharField(_("Län"),max_length=50, null=True, blank=True)
    city = models.CharField(_("Stad"),max_length=50, null=True, blank=True)
    sex = models.CharField(_("Kön"), choices=sex_choices, null=True, blank=False, max_length=50)

    profile_picture = models.ImageField(_("Profilbild"),null=True, blank=True, default="avatar.svg", upload_to = "static/images/user_profile_pics/")
    
    is_staff = models.BooleanField(default=False)
    is_active = models.BooleanField(default=False)
    date_joined = models.DateTimeField(default=timezone.now)

    USERNAME_FIELD = 'username' # defines the unique identifier for the User model
    REQUIRED_FIELDS = ["email"] # A list of the field names that will be prompted for when creating a user via the createsuperuser management command
    
    objects = CustomUserManager()
    
    def __str__(self):
        return self.username

profilesettings.models:

from generalpage.models import User, UserInfo, Room, Message, Topic
from django.db import models

class UserProfile(User):
    pass
    
class Settings(UserInfo):
    pass



My models after @viktorblindh's suggestion are:

for admin.py:

from django.contrib import admin    
from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
from profilesettings.models import UserProfile
from generalpage.models import User

class UserProfileInline(admin.StackedInline):
    model = UserProfile
    min_num = 1

class UserProfileAdmin(BaseUserAdmin):
    inlines = [UserProfileInline, ]


admin.site.register(User, UserProfileAdmin)

# admin.site.register(UserProfile)
# admin.site.register(Settings)

and for profilesettings.models:

from generalpage.models import User, UserInfo, Room, Message, Topic
from django.db import models
from django.conf import settings

class UserProfile(User):
    user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
class Settings(UserInfo):
    pass


The solution suggested in:

https://stackoverflow.com/questions/42424955/how-to-exchange-data-between-apps-in-django-using-the-database

solved my issue. 


2022-10-24

Error while trying to connect with js code and mongodb

So I've been trying to figure out this error and I couldn't find a solution anywhere. After some time I found where the problem is, but I don't know why:

    const savedUser = await newUser.save(); 

I believe this line is the problem, because execution keeps ending up in the catch block and logging "not working".

This is my code:

const router = require("express").Router();
const User = require("../models/User");

router.post("/register", async (req, res) => {
  const newUser = new User({
    username: req.body.username,
    email: req.body.email,
    password: req.body.password,
  });

  try {
    const savedUser = await newUser.save(); //I believe here is the problem
    res.status(201).json(savedUser);
  } catch (err) {
    console.log("not working");
    res.status(500).json(err);
  }
});
module.exports = router;

I'm using Postman and I send the body as raw JSON:

{
    "username":"lama",
    "email":"lama@gmail.com",
    "password":"12345"

}

and for some reason I get this as a result:

{
    "ok": 0,
    "code": 8000,
    "codeName": "AtlasError"
}

Thank you in advance!



ROI Boxes specified on specific frames using 3D data in Pyqtgraph app

I'm working with 3D data in my application, which I display using the ImageView class. The code below (copied from GeeksforGeeks) is a very close example to it, but I have a custom ROI button that displays a custom ROI box in the display.

My question: is there a way I can define the ROI Box to exist on only specified frames? For example: I want the ROI box to exist from frames 3 to 10 and 20 to 25.

Thanks in advance!

# importing Qt widgets
from PyQt5.QtWidgets import *
 
# importing system
import sys
 
# importing numpy as np
import numpy as np
 
# importing pyqtgraph as pg
import pyqtgraph as pg
from PyQt5.QtGui import *
from PyQt5.QtCore import *
 
 
# Image View class
class ImageView(pg.ImageView):
 
    # constructor which inherit original
    # ImageView
    def __init__(self, *args, **kwargs):
        pg.ImageView.__init__(self, *args, **kwargs)
 
 
class Window(QMainWindow):
 
    def __init__(self):
        super().__init__()
 
        # setting title
        self.setWindowTitle("PyQtGraph")
 
        # setting geometry
        self.setGeometry(100, 100, 600, 500)
 
        # icon
        icon = QIcon("skin.png")
 
        # setting icon to the window
        self.setWindowIcon(icon)
 
        # calling method
        self.UiComponents()
 
        # showing all the widgets
        self.show()
 
        # setting fixed size of window
        self.setFixedSize(QSize(600, 500))
 
    # method for components
    def UiComponents(self):
 
        # creating a widget object
        widget = QWidget()
 
        # creating a label
        label = QLabel("Geeksforgeeks Image View")
 
        # setting minimum width
        label.setMinimumWidth(130)
 
        # making label do word wrap
        label.setWordWrap(True)
 
        # setting configuration options
        pg.setConfigOptions(antialias=True)
 
        # creating image view object
        imv = ImageView()
 
        # Create random 3D data set with noisy signals
        img = pg.gaussianFilter(np.random.normal(
            size=(200, 200)), (5, 5)) * 20 + 100
 
        # setting new axis to image
        img = img[np.newaxis, :, :]
 
        # decay data
        decay = np.exp(-np.linspace(0, 0.3, 100))[:, np.newaxis, np.newaxis]
 
        # random data
        data = np.random.normal(size=(100, 200, 200))
        data += img * decay
        data += 2
 
        # adding time-varying signal
        sig = np.zeros(data.shape[0])
        sig[30:] += np.exp(-np.linspace(1, 10, 70))
        sig[40:] += np.exp(-np.linspace(1, 10, 60))
        sig[70:] += np.exp(-np.linspace(1, 10, 30))
 
        sig = sig[:, np.newaxis, np.newaxis] * 3
        data[:, 50:60, 30:40] += sig
 
        # setting image to image view
        # Displaying the data and assign each frame a time value from 1.0 to 3.0
        imv.setImage(data, xvals=np.linspace(1., 3., data.shape[0]))
 
        # Set a custom color map
        colors = [
            (0, 0, 0),
            (4, 5, 61),
            (84, 42, 55),
            (15, 87, 60),
            (208, 17, 141),
            (255, 255, 255)
        ]
 
        # color map
        cmap = pg.ColorMap(pos=np.linspace(0.0, 1.0, 6), color=colors)
 
        # setting color map to the image view
        imv.setColorMap(cmap)
 
        # Creating a grid layout
        layout = QGridLayout()
 
        # minimum width value of the label
        label.setFixedWidth(130)
 
        # setting this layout to the widget
        widget.setLayout(layout)
 
        # adding label in the layout
        layout.addWidget(label, 1, 0)
 
        # plot window goes on right side, spanning 3 rows
        layout.addWidget(imv, 0, 1, 3, 1)
 
        # setting this widget as central widget of the main window
        self.setCentralWidget(widget)
 
 
# create pyqt5 app
App = QApplication(sys.argv)
 
# create the instance of our Window
window = Window()
 
# start the app
sys.exit(App.exec())
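
One way to sketch this: keep the allowed frame ranges in plain data and toggle the ROI's visibility whenever the displayed frame changes. This assumes pyqtgraph's ImageView emits sigTimeChanged when the displayed frame changes (worth verifying in your version); the ranges and attribute names are hypothetical:

```python
# Hypothetical inclusive frame ranges on which the ROI box should exist
ROI_FRAMES = [(3, 10), (20, 25)]

def roi_visible(frame, ranges=ROI_FRAMES):
    """True if the ROI box should be shown on this frame index."""
    return any(lo <= frame <= hi for lo, hi in ranges)

# Inside the ImageView subclass, something along these lines:
#
#     self.sigTimeChanged.connect(self.on_frame_changed)
#
#     def on_frame_changed(self, index, time):
#         self.my_roi.setVisible(roi_visible(index))   # my_roi: custom ROI box
```

The ROI item itself stays alive the whole time; only its visibility tracks the current frame, so panning through the timeline shows it exactly on frames 3-10 and 20-25.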