2023-02-28

Java 18: Vector API

The Vector API is an incubating feature that reached its third incubation in Java 18 (JEP 417), having first appeared in Java 16. It provides a set of vectorized operations that can be used to accelerate mathematical computations, enabling developers to write code that compiles down to SIMD (single instruction, multiple data) instructions on CPUs with vector units.


The Vector API provides a set of classes and interfaces that can be used to write code that performs vectorized operations on arrays of numeric data (byte, short, int, long, float, and double). The API supports element-wise operations such as addition, multiplication, and division, as well as cross-lane operations such as reductions, which make computations like dot products and norms straightforward to build. It also includes support for masks, shuffles, and conversions between vector shapes.


Here's an example of how the Vector API can be used to accelerate a simple mathematical operation:


import java.util.Arrays;
import jdk.incubator.vector.*;

public class VectorExample {

    public static void main(String[] args) {
        float[] a = {1.0f, 2.0f, 3.0f, 4.0f};
        float[] b = {5.0f, 6.0f, 7.0f, 8.0f};
        float[] result = new float[4];

        VectorSpecies<Float> species = FloatVector.SPECIES_128;

        // Vectorized loop: processes species.length() floats per iteration.
        int i = 0;
        for (; i <= a.length - species.length(); i += species.length()) {
            FloatVector av = FloatVector.fromArray(species, a, i);
            FloatVector bv = FloatVector.fromArray(species, b, i);
            FloatVector cv = av.add(bv);
            cv.intoArray(result, i);
        }

        // Scalar tail loop for any remaining elements that don't fill a full vector.
        for (; i < a.length; i++) {
            result[i] = a[i] + b[i];
        }

        System.out.println(Arrays.toString(result));
    }
}

In this example, the code performs a vectorized addition of two arrays of floating-point numbers using the Vector API. The SPECIES_128 constant specifies the vector size (128 bits, i.e. four floats per vector), and the code uses the FloatVector class to perform the vectorized addition. The intoArray method writes the results back to an output array. Because the API lives in the jdk.incubator.vector module, the code must be compiled and run with --add-modules jdk.incubator.vector.
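The reductions mentioned earlier follow the same loop shape. Below is a minimal sketch of a dot product, assuming the same incubator module and flags; reduceLanes(VectorOperators.ADD) collapses each chunk of lane-wise products into a scalar:

import jdk.incubator.vector.*;

public class DotProductExample {

    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static float dot(float[] a, float[] b) {
        float sum = 0f;
        int i = 0;
        // Multiply lanes pairwise, then reduce each chunk to a scalar and accumulate.
        for (; i <= a.length - SPECIES.length(); i += SPECIES.length()) {
            FloatVector av = FloatVector.fromArray(SPECIES, a, i);
            FloatVector bv = FloatVector.fromArray(SPECIES, b, i);
            sum += av.mul(bv).reduceLanes(VectorOperators.ADD);
        }
        // Scalar tail for elements that don't fill a full vector.
        for (; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        float[] a = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f};
        float[] b = {5.0f, 6.0f, 7.0f, 8.0f, 9.0f};
        System.out.println(dot(a, b)); // 115.0
    }
}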


The Vector API can significantly improve the performance of mathematical computations, especially on modern hardware with vector units. However, it's important to note that not all hardware supports wide vector units, and the performance gains may vary depending on the size and type of data being processed. Additionally, the Vector API is still an incubating feature and subject to change in future Java releases.
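As a side note on loop structure: the scalar tail loop in the earlier example can also be expressed with a mask, so the whole array is handled by vector code. A sketch, under the same module assumptions as above:

import jdk.incubator.vector.*;

public class MaskedAddExample {

    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static void add(float[] a, float[] b, float[] result) {
        for (int i = 0; i < a.length; i += SPECIES.length()) {
            // The mask disables lanes past the end of the array, so the last
            // partial chunk needs no separate scalar loop.
            VectorMask<Float> m = SPECIES.indexInRange(i, a.length);
            FloatVector av = FloatVector.fromArray(SPECIES, a, i, m);
            FloatVector bv = FloatVector.fromArray(SPECIES, b, i, m);
            av.add(bv).intoArray(result, i, m);
        }
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f, 4f, 5f};
        float[] b = {5f, 6f, 7f, 8f, 9f};
        float[] result = new float[a.length];
        add(a, b, result);
        System.out.println(java.util.Arrays.toString(result)); // [6.0, 8.0, 10.0, 12.0, 14.0]
    }
}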


Overall, the Vector API is a powerful addition to the JDK that can enable developers to write code that takes full advantage of modern hardware and delivers high-performance mathematical computations.

Pattern Matching for switch Statements

Pattern Matching for switch Statements is a preview feature in Java 18 (its second preview, JEP 420) that allows developers to use patterns as case labels in switch statements and expressions. It builds on pattern matching for instanceof, which was previewed in Java 14 and 15 and finalized in Java 16, and it was first previewed for switch in Java 17; it remains a preview feature in Java 18.

Prior to this feature, switch statements could only use constants of integral types, enums, and strings as case labels. With pattern matching, developers can now use patterns, which are more flexible and powerful than simple literals. Patterns allow developers to match against more complex conditions, such as the runtime type of a value and conditions on that value.


Here's an example of how pattern matching works:

public String getAnimalSound(Animal animal) {
    String sound = switch (animal) {
        case Cat c -> "Meow";
        case Dog d -> "Woof";
        case Lion l -> "Roar";
        default -> throw new IllegalArgumentException("Unknown animal: " + animal);
    };
    return sound;
}

In this example, the switch statement uses pattern matching to match against different types of animals. The cases use the -> operator to specify the pattern to match against and the corresponding code to execute if the pattern matches. The default case is used to handle any animals that do not match any of the previous cases.


Here are the kinds of patterns and case labels that can be used in switch statements in the Java 18 preview:


Type patterns: match against a type (including its subtypes) and bind the matched value to a new variable, as in case String s.

Guarded patterns: a pattern combined with a boolean condition, as in case String s && s.length() > 3 (later previews changed this syntax to use when).

Parenthesized patterns: a pattern wrapped in parentheses to resolve parsing ambiguities.

null labels: case null lets a switch handle null directly instead of throwing a NullPointerException.

Pattern Matching for switch Statements can simplify code and make it more expressive and readable. It can also reduce the need for if-else statements and other conditional logic. However, it's important to use pattern matching judiciously and not overuse it, as it can make code more complex if used improperly.
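As a concrete illustration of the guard and null-label forms listed above, here is a short sketch in the Java 18 preview syntax (it must be compiled with --enable-preview; later previews replaced && guards with when):

static String describe(Object obj) {
    return switch (obj) {
        case null -> "nothing";                          // null handled as a label
        case Integer i && i > 0 -> "positive int " + i;  // guarded type pattern
        case Integer i -> "non-positive int " + i;       // plain type pattern binds i
        case String s -> "string of length " + s.length();
        default -> "something else";
    };
}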

Can I install MySQL ODBC Driver 8 on Amazon Linux 2?

I'm running into an issue when connecting to a MySQL database on Amazon Linux 2 using pyodbc and the mysql-connector-odbc driver, as shown below:

pyodbc.OperationalError: ('08004', "[08004] [unixODBC][MySQL][ODBC 5.2(w) Driver]Authentication plugin 'caching_sha2_password' cannot be loaded: /usr/lib64/mysql/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory (2059) (SQLDriverConnect)")

I tried upgrading my Amazon Linux version to 2.0.20230207.0, but got a similar error:

pyodbc.OperationalError: ('08004', "[08004] [unixODBC][MySQL][ODBC 5.2(w) Driver]Authentication plugin 'sha256_password' cannot be loaded: /usr/lib64/mysql/plugin/sha256_password.so: cannot open shared object file: No such file or directory (2059) (SQLDriverConnect)")

My understanding is that this happens because the MySQL server uses a newer authentication plugin than my ODBC driver supports. I would love to upgrade the ODBC driver to validate that this is the cause, but it seems that the Amazon Linux repositories only offer mysql-connector-odbc-5.2.5, while the latest version available is 8.0.32. Is there any way to install the newer version of the MySQL ODBC driver manually, or is it just not possible with Amazon Linux 2?

Thanks in advance!



How to send two different Outlook emails from the same Excel sheet when 7 days from the due date

I am trying to automatically notify my coworkers when their work is seven days away from being due. There are two tables on the same sheet (one sheet per coworker).

How can I send an email based on two different tables on the same sheet?

For example, Bill has treatment plans that will be due. He also has assessments that will be due. The treatment plans are in one table and the assessments are in another table on the same sheet (Sheet2 Bill).

I want an email sent to Bill when a treatment plan is seven days away from the due date.
I want a different email sent to Bill when an assessment is seven days away from the due date.

I am not seeing any error messages, but I am also not receiving any emails.

Client names are in column B.

This is a sample line from my table (screenshot not reproduced here).

Sub email_treatment_plans()
Dim r As Range, cell As Range
Dim ws As Worksheet
Dim Mail_Object As Object, Mail_Single As Object
Dim Email_Subject As String, Email_Send_From As String, Email_Send_To As String, _
    Email_Cc As String, Email_Bcc As String, Email_Body As String

Set ws = ThisWorkbook.Worksheets("Sheet2 (Bill)")
Set r = ws.Range("F5:F12") 'Due dates in the treatment plan table
Set Mail_Object = CreateObject("Outlook.Application")

For Each cell In r
    If cell.Value <= (Date + 7) And cell.Value >= (Date) Then

        Email_Subject = "Treatment plan is due soon"
        Email_Send_From = "blahblah@blahblah.blah"
        Email_Send_To = "blahblah@blahblah.blah"
        Email_Body = "This is an automated reminder that you have a treatment plan due within the next 7 days."

        On Error GoTo debugs
        Set Mail_Single = Mail_Object.CreateItem(0)

        With Mail_Single
            .Subject = Email_Subject
            .To = Email_Send_To
            .Body = Email_Body
            .send
        End With

    End If
Next cell

Exit Sub 'Don't fall through into the error handler
debugs: If Err.Description <> "" Then MsgBox Err.Description
End Sub

'Second routine, one per table; a module can't contain two Subs both named "email"
Sub email_assessments()
Dim r As Range, cell As Range
Dim ws As Worksheet
Dim Mail_Object As Object, Mail_Single As Object
Dim Email_Subject As String, Email_Send_From As String, Email_Send_To As String, _
    Email_Cc As String, Email_Bcc As String, Email_Body As String

Set ws = ThisWorkbook.Worksheets("Sheet2 (Bill)")
Set r = ws.Range("F19:F26") 'Due dates in the assessment table
Set Mail_Object = CreateObject("Outlook.Application")

For Each cell In r
    If cell.Value <= (Date + 7) And cell.Value >= (Date) Then

        Email_Subject = "Assessment is due soon"
        Email_Send_From = "blahblah@blahblah.blah"
        Email_Send_To = "blahblah@blahblah.blah"
        Email_Body = "This is an automated reminder that you have an assessment due within the next 7 days."

        On Error GoTo debugs
        Set Mail_Single = Mail_Object.CreateItem(0)

        With Mail_Single
            .Subject = Email_Subject
            .To = Email_Send_To
            .Body = Email_Body
            .send
        End With

    End If
Next cell

Exit Sub 'Don't fall through into the error handler
debugs: If Err.Description <> "" Then MsgBox Err.Description
End Sub



Problem with Kafka deserialization in Python

I'm new to Kafka and Python, but I need to create a consumer. :)

I created a simple consumer and got results, but the data in Kafka is stored in Avro, which is why I need to deserialize it.

I tried a variant like this:

import os
from confluent_kafka import Consumer
from confluent_kafka.serialization import SerializationContext, MessageField
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer


if __name__ == "__main__":

    class test(object):

        def __init__(self, test_id=None, dep=None, descr=None, stor_key=None, pos=None, time_dt=None):
            self.test_id = test_id
            self.dep = dep
            self.descr = descr
            self.stor_key = stor_key
            self.pos = pos
            self.time_dt = time_dt


def dict_to_klf(obj, ctx):
    if obj is None:
        return None

    return test(test_id=obj['test_id'],
                dep=obj['dep'],
                descr=obj['descr'],
                stor_key=obj['stor_key'],
                pos=obj['pos'],
                time_dt=obj['time_dt'])


schema = "descr.avsc"

path = os.path.realpath(os.path.dirname(__file__))
with open(f"{path}\\{schema}") as f:
    schema_str = f.read()

sr_conf = {'url': ':8081'}
schema_registry_client = SchemaRegistryClient(sr_conf)

avro_deserializer = AvroDeserializer(schema_registry_client,
                                     schema_str,
                                     dict_to_klf)

consumer_config = {
    "bootstrap.servers": "com:9092",
    "group.id": "descr_events",
    "auto.offset.reset": "earliest"
}

consumer = Consumer(consumer_config)

consumer.subscribe(['descr'])

while True:
    msg = consumer.poll(1)
    if msg is None:
        continue

    user = avro_deserializer(msg.value(), SerializationContext(msg.topic(), MessageField.VALUE))

    print(msg.topic())
    print("-------------------------")

And got this error:

fastavro._schema_common.SchemaParseException: Default value <undefined> must match schema type: long

SCHEMA_PATH = "descr.avsc" looks like

{
    "type": "record",
    "name": "klf",
    "namespace": "test_ns",
    "fields": [ 
        {
            "name": "descr",
            "type": "string",
            "default": "undefined"
        },
        {
            "name": "test_id",
            "type": "long",
            "default": "undefined"
        },
        {
            "name": "dep",
            "type": "string",
            "default": "undefined"
        },
        {
            "name": "stor_key",
            "type": "string",
            "default": "undefined"
        },
        {
            "name": "time_dt",
            "type": "string",
            "default": "undefined"
        },
        {
            "name": "pos",
            "type": "string",
            "default": "undefined"
        }
    ]
}

What do I need to change to get the result with data?



header=None not working as intended in Pandas

When loading a CSV, the first line is being turned into the header, even when using header=None.

First 6 lines of the CSV:

"L","65496","56","17","2","1","2","2","2","1","2","2","2","2","2","202210024616","10/07/2022","11/04/2022","6401 30TH AVE S","","Seattle","98108","","6401 30TH AVE S","","Seattle","WA","98108","","7233841","206","","","24","11","3","0.00","1","","2","2","2","2","2","1","2","2","2","2","2","2","2","1805","911.00","0","0.00","2","2","584","0","44","200001263785","584","","","","584","","","","","","","","","","","","","","","","","","","","","","","","","","","","0","11/06/2022","0","EAP","","","","","","","","21-3260A","","","","","","","","","","","","","","","","","","","","","","","","","","",""
"L","65082","56","17","1","1","2","2","2","1","2","2","2","2","2","202210024429","10/04/2022","10/13/2022","6317 1st ave ne","","Seattle","98115","","6317 1st ave ne","","Seattle","WA","98115","","3563485","206","","","8","11","1","0.00","1","","2","2","2","2","2","1","2","2","2","2","2","2","2","1031","542.00","0","0.00","2","2","391","0","44","220012687236","391","","","","391","","","","","","","","","","","","","","","","","","","","","","","","","","","","0.00","11/06/2022","0","EAP","","","","","","","","21-3260A","","","","","","","","","","","","","","","","","","","","","","","","","","",""
"L","65151","56","17","1","1","2","2","2","1","2","2","2","2","2","202210024435","10/04/2022","10/20/2022","1215 N 45th St APT 425","","Seattle","98103","","1215 N 45th St APT 425","","Seattle","WA","98103","","4126320","206","","","6","11","3","70.00","3","","1","2","2","1","2","2","2","2","2","2","2","2","2","417","780.00","0","0.00","1","2","394","0","57","1309905172","394","","","","394","","","","","","","","","","","","","","","","","","","","","","","","","","","","0.00","11/06/2022","0","EAP","","","","","","","","21-3260A","","","","","","","","","","","","","","","","","","","","","","","","","","",""
"L","66363","56","17","1","1","2","2","2","1","2","2","2","2","2","202211026173","11/09/2022","12/07/2022","12036 3rd Ave NW","","Seattle","98177","","12036 3rd Ave NW","","Seattle","WA","98177","","2980334","206","","","13","11","1","1562.00","1","","2","2","2","2","2","2","2","2","2","2","2","2","2","0","1212.00","0","0.00","2","2","1000","0","44","200023063106","1000","","","","1000","","","","","","","","","","","","","","","","","","","","","","","","","","","","0.00","12/28/2022","0","EAP","","","","","","","","21-3260A","","","","","","","","","","","","","","","","","","","","","","","","","","",""
"L","65918","56","17","1","1","2","2","2","1","2","2","2","2","2","202211026510","11/16/2022","11/17/2022","955 10th Ave E","","Seattle","98102","","955 10th Ave E","","Seattle","WA","98102","","3545595","206","","","0","11","3","0.00","1","","2","2","2","2","2","2","2","2","2","2","2","2","2","0","2038.00","0","0.00","2","2","1000","0","44","220008470498","510","57","1541910000","490.00","1000","","","","","","","","","","","","","","","","","","","","","","","","","","","","0.00","11/30/2022","0","EAP","","","","","","","","21-3260A","21-3260A","","","","","","","","","","","","","","","","","","","","","","","","","",""
"L","65252","56","17","1","1","2","2","2","1","2","2","2","2","2","202210024484","10/05/2022","10/25/2022","348 19th Ave","","Seattle","98122","","348 19th Ave","","Seattle","WA","98122","","4500209","206","","","17","11","1","1.00","1","","1","2","2","2","2","1","2","2","1","2","2","2","2","1239","131.00","0","130.88","2","2","200","0","57","3234140000","102","44","200012073868","98.00","200","","","","","","","","","","","","","","","","","","","","","","","","","","","","0.00","11/06/2022","0","EAP","","","","","","","","21-3260A","21-3260A","","","","","","","","","","","","","","","","","","","","","","","","","",""

I typed the command

df_24 = pd.read_csv(r'Uploads/HHD_Formatted_20230224.csv', header=None)

and got

   L  65496  56  17  2  1  2.1  2.2  2.3  1.1  2.4  2.5  2.6  2.7  2.8  \
0  L  65082  56  17  1  1    2    2    2    1    2    2    2    2    2   
1  L  65151  56  17  1  1    2    2    2    1    2    2    2    2    2   
2  L  66363  56  17  1  1    2    2    2    1    2    2    2    2    2   
3  L  65918  56  17  1  1    2    2    2    1    2    2    2    2    2   
4  L  65252  56  17  1  1    2    2    2    1    2    2    2    2    2   

---Solved: I was using

head(df)

to view the CSV, instead of using

display(df)

which shows the dataset with the correct index:

    0   1   2   3   4   5   6   7   8   9   ... 123 124 125 126 127 128 129 130 131 132
0   L   65496   56  17  2   1   2   2   2   1   ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1   L   65082   56  17  1   1   2   2   2   1   ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2   L   65151   56  17  1   1   2   2   2   1   ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3   L   66363   56  17  1   1   2   2   2   1   ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4   L   65918   56  17  1   1   2   2   2   1   ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2209    L   68217   56  17  1   1   2   2   2   1   ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2210    L   68222   56  17  2   1   2   2   2   1   ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2211    L   68165   56  17  1   1   2   2   2   1   ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2212    L   68268   56  17  1   1   2   2   2   1   ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2213    L   68254   56  17  1   1   1   2   2   1   ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN


MediaWiki Display Historic Date/Time In User's Local Time

Presently, I've installed MediaWiki on a wiki which includes a section with a timeline of modern historic events in basically every article. It's important to display the precise time of day of these global historic events in a meaningful way and I anticipated this would be easy to accomplish.

Presently, I am converting timestamps of events into my current local time and typing them as plain text in the article when I make edits. This works great for me, but most other users of the wiki will be in different timezones and I'd rather not arbitrarily declare my local timezone as the "ultimate source of global truth" and force everyone else around the world to convert all times to my local timezone.

Similarly, I could convert all the times to UTC but, in fact, the majority of users I anticipate won't be from Europe. I could use Eastern time, but honestly, many of the timelines concern events that have happened in Asia, Australia, and other places globally. It's very frustrating for me to try to mentally convert times that are in UTC all the time when I'm trying to understand a series of historical events, and I don't want to put anyone else through that anguish.

What I'd like is a simple markup to automatically display a fixed point in time (ideally accepted as either a timestamp or date including a timezone) in the user's local time. That is, (1) the timezone of their browser if it can be determined, (2) the timezone they set in "Time zone" under "Appearance", or (3) the default timezone of the wiki only as a fallback. The best solution would be one that appears on the Visual Editor and allows the user to easily insert a date/time without having to necessarily go to the source editor level. It would be a really cool feature if above the timeline I could even stick a drop down so the user could see the events represented in different timezones as they are playing out globally, but I'm very happy with just the basics.

Here are the various ways I've tried to solve this problem so far that haven't worked:

(1) Dynamic dates, or "Date formatting and linking". Apparently this was removed in MediaWiki 1.21. I'm not sure that it is exactly what I'm looking for anyway.

(2) Magic words. All of these apparently display in UTC only. They also only display the current time and not past historic points in time.

(3) MediaWiki's #time function in the ParserFunctions extension. ParserFunctions actually came pre-installed and just had to be enabled. Unfortunately, all the times are still displayed in UTC no matter what, subjecting all users who don't live in a thin strip of Europe and Africa (including me) to "endless anguish" as above.

(4) MediaWiki's #timel function in the ParserFunctions extension. I got excited briefly before I realized the dates are still only displayed in a single timezone. I can change the timezone using $wgLocaltimezone in LocalSettings.php but that's only changing it globally for the whole wiki. If a user views the wiki or logs in from Europe or Asia or Australia, all the times will not make much sense to them and they will be subjected to "endless anguish". Another less severe but still notable issue is that these time instructions appear to only be able to be inserted by manually editing the source code, and there isn't any option in the Visual Editor.

(5) The StringFunctions extension also has some time processing capabilities. Unfortunately it's just an older version of the ParserFunctions extension and obsolete.

(6) I found something called Semantic MediaWiki which is apparently an extension that supports a "Type_Date" structure, which "is used for data values that represent points in time". However, even if I wanted to install that entire extension just for this simple task, it unfortunately says "[u]ser settings are not taken into account for displaying dates."

(7) Date formatting and linking or Template:Date. I can't find any articles here on Stack Overflow that will help me either. At this point, I'm just flailing around basically.

Is there any way to get functionality like this to work on MediaWiki or is the only solution to develop some sort of simple plug-in (and how would I have to do that)? This doesn't seem like it should be that complicated of a thing to do/accommodate and I'm hoping there's a simpler solution I've overlooked. Thanks everyone for your help or ideas.



2023-02-27

How to rename identical values in a column within R?

Say a data set:

a <- c(101,101,102,102,103,103)
b <- c("M","M","P","P","M","M")
dt <- as.data.frame(cbind(a,b))
dt

    a b
1 101 M
2 101 M
3 102 P
4 102 P
5 103 M
6 103 M

Column a is subject_ID, and column b is subject_name. I want to uniquely rename subject ID 101 to M1, and 103 to M2.

Is there a way to do this by indexing?

This does not work.

dt.try1 <- gsub("M","M1",dt[1:2,c(2)])
dt.try1
[1] "M1" "M1"

This is what would be ideal result:

    a  b
1 101  M
2 101  M
3 102  P
4 102  P
5 103 M2
6 103 M2

Why does this not work?



How does the preprocessor resolve the path of Linux kernel header files?

I am new to C programming language, and currently trying to add new syscall to Linux kernel by re-compiling the kernel.

When I read the source code, I found it very difficult to locate the path of a header file.

For example:

#include <linux/kernel.h>   /* for printk */
#include <linux/syscalls.h> /* for SYSCALL_DEFINE1 macro */

SYSCALL_DEFINE1(printmsg, int, i)
{
    printk(KERN_DEBUG "Hello! This is a msg %d", i);
    return 1;
}

This is one of my customized system calls. I edited the Makefile in the kernel/ folder and syscall_64.tbl. After re-compiling, it worked. No compile errors.

However, my problem now is how the preprocessor resolves a path like <linux/syscalls.h>. Based on my previous understanding, the preprocessor would search in the folder /usr/include. Certainly, there is no linux/syscalls.h within /usr/include.

I found that linux/syscalls.h is actually in the kernel source tree, under include/linux/.

I was wondering what makes the preprocessor search within the project, just like #include "..." does, when the code is actually using <>.

I was wondering if it is because of some part of the Makefile? If so, which command in the Makefile could make this happen?

Thank you for any help, and please let me know if I misunderstood something.



How can I keep the keyboard open with @FocusState in SwiftUI without a bounce?

I am trying to develop a view where the user must input his name and surname. These two text fields have a @FocusState. Everything is working well except for a little bounce when the focus changes its target (screen recording "bounce app" not reproduced here).

I do not know what in my code is causing this issue. This is my code:

struct OnboardingViewPart2: View {

    enum Field: Hashable {
        case name
        case surname
    }

    @State var displayName         = ""
    @State var showImagePicker     = false
    @State var isSomePhotoSelected = false
    @State var displaySurname      = ""
    @FocusState var focusedField: Field?

    // For image picker
    @State var imageSelected: UIImage = UIImage(systemName: "person.fill")!
    @State var sourceType: UIImagePickerController.SourceType = .photoLibrary

    var body: some View {

        VStack(alignment: .center, spacing: 20, content: {

            //other code

            // MARK: Textfield group
            Group {

                TextField("Add your name here...", text: $displayName)
                    .padding()
                    .frame(height: 60)
                    .frame(maxWidth: .infinity)
                    .background(Color.MyTheme.beige)
                    .cornerRadius(12)
                    .font(.headline)
                    .autocapitalization(.sentences)
                    .padding(.horizontal)
                    .focused($focusedField, equals: .name)

                TextField("Add your surname here...", text: $displaySurname)
                    .padding()
                    .frame(height: 60)
                    .frame(maxWidth: .infinity)
                    .background(Color.MyTheme.beige)
                    .cornerRadius(12)
                    .font(.headline)
                    .autocapitalization(.sentences)
                    .padding(.horizontal)
                    .focused($focusedField, equals: .surname)
            }
            .onSubmit { focusedField = focusedField == .name ? .surname : nil }

            //other code

        }) // VStack
        .frame(maxWidth: .infinity, maxHeight: .infinity)
        .background(Color.MyTheme.purple)
        //.edgesIgnoringSafeArea(.all)
        .sheet(isPresented: $showImagePicker, content: {
            ImagePicker(imageSelected: $imageSelected, sourceType: $sourceType, isSomeImageSelected: $isSomePhotoSelected)
        }) // Picker

    }
}

Updated: after updating the code with the proposed solution, this is the new result (screen recording "new result" not reproduced here).



Annotate plot with ordered pair of cartesian coordinates via Python and the Matplotlib library

Looking for a solution to properly annotate a subplot with an ordered pair of cartesian coordinates.

My figure is a bar graph of total product quantities with a line graph of the average price for the given products. For additional reference, please see the figure at the end of this article: https://medium.com/swlh/product-sales-analysis-using-python-863b29026957

Please note, I have two vertical axes where:

  • y1 = total quantity of a given product
  • y2 = average price of a given product
  • y1 & y2 share an x-axis of product categories

Rather than plotting labels "(x, y)", my goal is to plot labels for (y1, y2), i.e. "(qty, price)".

The current error that I am running into is that the list elements in my variable, label, are not recognized as "subscriptable objects". I am under the impression that the solution is to convert each element of my list into a string, but I am not positive.

df =

Products Quantity Price
Product1 10 100.00
Product2 15 200.00
Product3 20 150.00
Product2 30 200.00
Product3 50 150.00

Attempt

import pandas as pd
import matplotlib.pyplot as plt

quantity = df.groupby("Products")["Quantity"].sum()
price = df.groupby("Products")["Price"].mean()

fig, ax1 = plt.subplots()
ax2 = ax1.twinx()

ax1.bar(Products, quantity, color='.8', alpha =.8)
ax2.plot(Products, price, 'bo-')

ax1.set_xlabel('', fontweight='bold')
ax1.set_ylabel('Quantity', color = 'k', fontweight='bold')
ax2.set_ylabel('Price $', color = 'b', fontweight='bold')
ax1.set_xticklabels(Products, rotation=45, size = 8)

y1 = [i for i in quantity]
y2 = [j for j in price]

label = []
for x, y in zip(y1,y2):
    label.append(f"({x:.2f},{y:.2f})")

for i, label in enumerate(labels):
    plt.annotate(label, xy=(x[i], y[i]), xytext=(5, 5),
    textcoords='offset points', ha='left', va='bottom')
plt.show()

Trouble Area

#can't find a method to convert my list elements from float to string values *inline* with label.append()
label = []
for x, y in zip(y1,y2):
    label.append(f"({x:.2f},{y:.2f})")

I feel like I am looking for a solution similar to either:

  1. https://www.tutorialspoint.com/how-to-annotate-several-points-with-one-text-in-matplotlib
  2. https://queirozf.com/entries/add-labels-and-text-to-matplotlib-plots-annotation-examples


Iterate through an Array of Objects within another Array of Objects JavaScript [closed]

I have created an Array of Objects. Within this Array of Objects, there is another Array of Objects.

let firstArray = [
    {element: "This is a string"},
    {element: "This is a string"},
    {element: "This is a string"},
    {element: "This is a string"},
    {
        element: "This is a string",
        secondArray: [
            {
                otherElements: "This is a different element",
                userRating: 5
            },
            {
                otherElements: "This is a different element",
                userRating: 5
            },
            {
                otherElements: "This is a different element",
                userRating: 5
            },
            {
                otherElements: "This is a different element",
                userRating: 5
            },
            {
                otherElements: "This is a different element",
                userRating: 5
            }
        ]
    },
];

I want to loop through all of the objects in the Array named 'secondArray'. The program should then add all of the 'userRating' elements together and log the answer to the console.

The code I have tried did not work correctly:

for (let i of firstArray){
    console.log(element);
    for (let j of firstArray.secondArray) {
        console.log(j.userRating);
    }
}


In Ruby, optparse raises an error when a filename contains certain characters

I'm using optparse in a ruby program (ruby 2.7.1p83) under Linux. If any of the command-line arguments are filenames with "special" characters in them, the parse! method fails with this error:

invalid byte sequence in UTF-8

This is the code which fails ...

parser = OptionParser.new {
  |opts|
  ... etc. ...
}
parser.parse! # error occurs here

I know about the scrub method and other ways to do encoding in ruby. However, the place where the error occurs is in a library routine (OptionParser#parse!), and I have no control over how this library routine deals with strings.

I could pre-process the command-line arguments and replace the special characters in these arguments with an acceptable encoding, but then, in the case where the argument is a file name, I will be unable to open that file later in the program, because the filename I have accepted into the program will have been altered from the file's original name.

I could do something complicated like pre-traversing the arguments, building a hashmap where the key is the encoded argument and the value is the original argument, changing the ARGV values to the encoded values, parsing the encoded arguments using OptionParser, and then, after OptionParser completes, going through the resulting arguments and using the hashmap to replace the encoded arguments with their original values ... and then continuing with the program.

But I'm hoping that there would be a much simpler way to solve this problem in ruby.

Thank you in advance for any ideas or suggestions.

UPDATE: Here is more detailed info ...

I wrote the following minimal program called rtest.rb in order to test this:

#!/usr/bin/env run-ruby                                                                                                                               
# -*- ruby -*-                                                                                                                                        

require 'optparse'

parser = OptionParser.new {
}
parser.parse!

Process.exit(0)

I ran it as follows, with the only files present in the current directory being rtest.rb itself, and another file having this name: Äfoo ...

export LC_TYPE='en_us.UTF-8'
export LC_COLLATE='en_us.UTF-8'
./rtest.rb *

It generated the following error and stack trace ...

Traceback (most recent call last):
    7: from /home/hippo/bin/rtest.rb:8:in `<main>'
    6: from /opt/rubies/ruby-2.7.1/lib/ruby/2.7.0/optparse.rb:1691:in `parse!'
    5: from /opt/rubies/ruby-2.7.1/lib/ruby/2.7.0/optparse.rb:1666:in `permute!'
    4: from /opt/rubies/ruby-2.7.1/lib/ruby/2.7.0/optparse.rb:1569:in `order!'
    3: from /opt/rubies/ruby-2.7.1/lib/ruby/2.7.0/optparse.rb:1575:in `parse_in_order'
    2: from /opt/rubies/ruby-2.7.1/lib/ruby/2.7.0/optparse.rb:1575:in `catch'
    1: from /opt/rubies/ruby-2.7.1/lib/ruby/2.7.0/optparse.rb:1579:in `block in parse_in_order'
/opt/rubies/ruby-2.7.1/lib/ruby/2.7.0/optparse.rb:1579:in `===': invalid byte sequence in UTF-8 (ArgumentError)

Here is what appears in the pertinent section of the file /opt/rubies/ruby-2.7.1/lib/ruby/2.7.0/optparse.rb . See line 1579...

 1572   def parse_in_order(argv = default_argv, setter = nil, &nonopt)  # :nodoc:                                                                     
 1573     opt, arg, val, rest = nil
 1574     nonopt ||= proc {|a| throw :terminate, a}
 1575     argv.unshift(arg) if arg = catch(:terminate) {
 1576       while arg = argv.shift
 1577         case arg
 1578           # long option                                                                                                                           
 1579           when /\A--([^=]*)(?:=(.*))?/m
 1580             opt, rest = $1, $2

In other words, the regex match on the argument is failing due to this encoding issue.

When I have time (not right away, unfortunately), I'll put some code into that module to do encoding of the arg variable, to see if this might fix the problem.

FURTHER UPDATE: I am running under Ubuntu 20.04, and the version of ruby that's offered is 2.7.0. I also managed to get 2.7.1 running on my ancient Debian 8 box. This error occurs in both environments. I would have to install a newer version of ruby or compile it from source before I could try version 2.7.7 or version 3.x.

YET ANOTHER UPDATE: I had some unexpected spare time, so I built ruby-3.3.0 from source and re-ran the test. I got the same error!

% /opt/local/rubies/ruby-3.3.0/bin/ruby ./rtest.rb *
/opt/local/rubies/ruby-3.3.0/lib/ruby/3.3.0+0/optparse.rb:1640:in `===': invalid byte sequence in UTF-8 (ArgumentError)
    from /opt/local/rubies/ruby-3.3.0/lib/ruby/3.3.0+0/optparse.rb:1640:in `block in parse_in_order'
    from /opt/local/rubies/ruby-3.3.0/lib/ruby/3.3.0+0/optparse.rb:1636:in `catch'
    from /opt/local/rubies/ruby-3.3.0/lib/ruby/3.3.0+0/optparse.rb:1636:in `parse_in_order'
    from /opt/local/rubies/ruby-3.3.0/lib/ruby/3.3.0+0/optparse.rb:1630:in `order!'
    from /opt/local/rubies/ruby-3.3.0/lib/ruby/3.3.0+0/optparse.rb:1739:in `permute!'
    from /opt/local/rubies/ruby-3.3.0/lib/ruby/3.3.0+0/optparse.rb:1764:in `parse!'
    from ./rtest.rb:8:in `<main>'

However, I now think the error occurs because the filename is encoded in an unusual manner. If I do echo * in that directory, I see this, which is what I expect:

% echo *
Äfoo rtest.rb

However, if I do /bin/ls in the same directory, I see this:

% /bin/ls *
''$'\304''foo'   rtest.rb

And even the OS can't recognize the file with the name specified as follows ...

% /bin/cat 'Äfoo'
/bin/cat: Äfoo: No such file or directory

But if I use the longer, encoded file name, the OS has no trouble accessing the file ...

% /bin/cat ''$'\304''foo
File contents
File contents

The ls command seems to know how to encode the Äfoo filename into ''$'\304''foo, but ruby doesn't seem to know how to do this.



Product validation failed error while using POST API

I have created POST and GET requests for a product in Node/Express. The GET request API is working fine, but the POST request throws an error:

models.js file

const mongoose = require("mongoose")

const productSchema = new mongoose.Schema({
    name: {
        type: String,
        required: true,
        trim: true,
    },
    description: {
        type: String,
        required: [true, "please enter product description"]
    },
    price: {
        type: Number,
        required: [true, "please enter product price"]
    },
    rating: {
        type: Number,
        default: 0
    },
    category: {
        type: String,
        required: true
    },
    stock: {
        type: Number,
        required: true,
        default: 1
    },
    numOfReview: {
        type: Number,
        default: 0,
    },
    createdAt: {
        type: Date,
        default: Date.now // pass the function, not Date.now(), so the date is set at insert time
    }
})

module.exports = mongoose.model("product", productSchema)

routes.js file

const express = require("express");
const {allproductController,createProductController}=require("../controllers/productControllers.js")

const router=express.Router();

router.route("/product/new").post(createProductController);
router.route("/products").get(allproductController)

module.exports= router

controllers.js file

const Product= require("../models/productModel.js")

exports.createProductController=async (req,res,next)=>{
    try {
   
        const product = await Product.create(req.body);
        res.status(201).json({
            success:true,
            product
        })
    } catch (error) {
        res.status(500).json({
            success:false,
            message:error.message
        })
    }
};

exports.allproductController=async(req,res)=>{
    try {
        const products=await Product.find();
        res.status(200).json({message:"success",data:products})
    } catch (error) {
        res.status(500).json({message:"error"})
    }

}

The GET API is working, but when the POST API is called it hits the catch block, and Postman shows this error:
"product validation failed: category: Path category is required., price: please enter product price, description: please enter product description, name: Path name is required." }



2023-02-26

How to select another radio button through Selenium


I have already tried everything, but could not get it to work:

driver.manage().window().maximize();

WebElement ab = driver.findElement(By.xpath("//div[@class='userSelectTabsLinks' ][contains(.,'Company')]/input[@name='seller_type']"));
Thread.sleep(1000);
ab.click();
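For what it's worth, a common pattern for radio buttons that refuse a plain click() is to wait for clickability and fall back to a JavaScript click. A hedged sketch, assuming Selenium 4's Duration-based WebDriverWait and reusing the question's XPath (which may not match the actual page):

import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class RadioClickSketch {

    static void selectCompanyRadio(WebDriver driver) {
        // Wait until the radio input is actually clickable instead of sleeping.
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        WebElement radio = wait.until(ExpectedConditions.elementToBeClickable(
                By.xpath("//div[contains(@class,'userSelectTabsLinks')][contains(.,'Company')]/input[@name='seller_type']")));
        try {
            radio.click();
        } catch (Exception e) {
            // Fallback: styled radios are often covered by a label, so a JavaScript
            // click on the underlying input can succeed where click() fails.
            ((JavascriptExecutor) driver).executeScript("arguments[0].click();", radio);
        }
    }
}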


HTTP error 403 while using role-based authorization

My project is using JDK 8, Spring Boot 2.7.8 (I'm stuck with Java 8). I am successfully using Google OAuth2 for authentication. Users can log into and out of my site. They even have to authenticate to get to a /secure page and that works.

However, when I try to incorporate roles and/or authorities, I can only get HTTP 403 errors. I have a database with groups, users, group_members, etc., just like JdbcUserDetailsManager wants, and they're filled with data, as shown below.

Two different users in the database

How can I get this to work? Below are code snippets.

@Configuration
public class SecurityConfig {
  @Autowired
  DataSource dataSource;

  @Bean
  public UserDetailsService userDetailsService() throws Exception {
    JdbcUserDetailsManager jdbcUserDetailsManager = new JdbcUserDetailsManager();

    jdbcUserDetailsManager.setDataSource(this.dataSource);

    return jdbcUserDetailsManager;
  }

  @Bean
  public SecurityFilterChain filterChain(
    HttpSecurity http) throws Exception {

    http
        .cors()
            .and()
        .csrf().disable()
        .authorizeRequests()
            .antMatchers("/secure/admin/**").hasAuthority("ADMIN")
            .antMatchers("/secure/**").authenticated()
            .antMatchers("/**").permitAll()
            .antMatchers("/logout").permitAll()
            .and()
        .sessionManagement()
            .sessionCreationPolicy(SessionCreationPolicy.ALWAYS)    
            .and()//
        .userDetailsService(userDetailsService())
        .oauth2Login()
            .loginPage(SecurityController.LOGIN_PAGE_MAPPING)//
            .defaultSuccessUrl(SecurityController.LOGIN_SUCCESS_MAPPING)
            .failureUrl("/login-failure-page")
            .and()
        .exceptionHandling()
            .accessDeniedPage("/access-denied-page")
            .and()
        .userDetailsService(userDetailsService())
        .logout()
            .logoutUrl("/logout")
            .clearAuthentication(true)
            .invalidateHttpSession(true)
            .deleteCookies("JSESSIONID")
        .logoutSuccessUrl(SecurityController.LOGOUT_PAGE_MAPPING).permitAll();

    // For Firefox and h2-console
    http
        .headers()
            .frameOptions().disable();

    return http.build();
  }
}

Below are excerpts from the log file.

2023-02-24 09:36:57.391 DEBUG 36STXT2 --- [io-8600-exec-10] s.s.w.c.SecurityContextPersistenceFilter : Cleared SecurityContextHolder to complete request
2023-02-24 09:36:57.391 DEBUG 36STXT2 --- [nio-8600-exec-4] s.s.w.c.SecurityContextPersistenceFilter : Cleared SecurityContextHolder to complete request
2023-02-24 09:36:57.452 DEBUG 36STXT2 --- [nio-8600-exec-5] o.s.s.w.FilterChainProxy                 : Securing GET /secure/admin/images/ui-bg_highlight-soft_100_deedf7_1x100.png
2023-02-24 09:36:57.452 DEBUG 36STXT2 --- [nio-8600-exec-5] w.c.HttpSessionSecurityContextRepository : Retrieved SecurityContextImpl [Authentication=OAuth2AuthenticationToken [Principal=Name: [...], Granted Authorities: [[ROLE_USER, SCOPE_https://www.googleapis.com/auth/userinfo.email, SCOPE_https://www.googleapis.com/auth/userinfo.profile, SCOPE_openid]], User Attributes: [{at_hash=..., sub=..., email_verified=true, iss=https://accounts.google.com, given_name=..., locale=en, nonce=..., picture=..., aud=[...apps.googleusercontent.com], azp=...apps.googleusercontent.com, name=..., exp=2023-02-24T18:36:54Z, family_name=..., iat=2023-02-24T17:36:54Z, email=...}], Credentials=[PROTECTED], Authenticated=true, Details=WebAuthenticationDetails [RemoteIpAddress=127.0.0.1, SessionId=...], Granted Authorities=[ROLE_USER, SCOPE_https://www.googleapis.com/auth/userinfo.email, SCOPE_https://www.googleapis.com/auth/userinfo.profile, SCOPE_openid]]]
2023-02-24 09:36:57.452 DEBUG 36STXT2 --- [nio-8600-exec-5] s.s.w.c.SecurityContextPersistenceFilter : Set SecurityContextHolder to SecurityContextImpl [Authentication=OAuth2AuthenticationToken [Principal=Name: [...], Granted Authorities: [[ROLE_USER, SCOPE_https://www.googleapis.com/auth/userinfo.email, SCOPE_https://www.googleapis.com/auth/userinfo.profile, SCOPE_openid]], User Attributes: [{at_hash=..., sub=..., email_verified=true, iss=https://accounts.google.com, given_name=..., locale=en, nonce=..., picture=https://lh3.googleusercontent.com/a/AGNmyxZaS0UTnXNuuvQh-HJ6ksu_COG5bFPQj4VZq5X7=s96-c, aud=[411774966392-ct0mr3fbeitc10svg2c3mdotsidprmke.apps.googleusercontent.com], azp=...apps.googleusercontent.com, name=..., exp=2023-02-24T18:36:54Z, family_name=..., iat=2023-02-24T17:36:54Z, email=...}], Credentials=[PROTECTED], Authenticated=true, Details=WebAuthenticationDetails [RemoteIpAddress=127.0.0.1, SessionId=...], Granted Authorities=[ROLE_USER, SCOPE_https://www.googleapis.com/auth/userinfo.email, SCOPE_https://www.googleapis.com/auth/userinfo.profile, SCOPE_openid]]]
2023-02-24 09:36:57.452 DEBUG 36STXT2 --- [nio-8600-exec-5] o.s.w.s.h.SimpleUrlHandlerMapping        : Mapped to ResourceHttpRequestHandler [classpath [META-INF/resources/], classpath [resources/], classpath [static/], classpath [public/], ServletContext [/]]
2023-02-24 09:36:57.452 DEBUG 36STXT2 --- [nio-8600-exec-5] o.s.s.w.a.i.FilterSecurityInterceptor    : Failed to authorize filter invocation [GET /secure/admin/images/ui-bg_highlight-soft_100_deedf7_1x100.png] with attributes [hasAuthority('ROLE_ADMIN')]
2023-02-24 09:36:57.453 DEBUG 36STXT2 --- [nio-8600-exec-5] o.s.s.w.a.AccessDeniedHandlerImpl        : Forwarding to /access-denied-page with status code 403
2023-02-24 09:36:57.453 DEBUG 36STXT2 --- [nio-8600-exec-5] o.s.w.s.DispatcherServlet                : "FORWARD" dispatch for GET "/google-sec-demo/access-denied-page", parameters={}
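One thing the log excerpt shows is that the authenticated principal only carries ROLE_USER plus the Google scopes, so the ROLE_ADMIN / ADMIN check can never pass: with oauth2Login(), authorities come from the OAuth2 provider, and calling userDetailsService() does not make Spring consult the JDBC tables during an OAuth2 login. A hedged sketch of one common fix: expose a GrantedAuthoritiesMapper bean, which Spring Security's OAuth2 login applies automatically, to merge in database authorities after login (the email-based lookup is an assumption about how the users table is keyed):

import java.util.HashSet;
import java.util.Set;

import org.springframework.context.annotation.Bean;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.mapping.GrantedAuthoritiesMapper;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.oauth2.core.oidc.user.OidcUserAuthority;

// Inside the existing SecurityConfig class:
@Bean
public GrantedAuthoritiesMapper userAuthoritiesMapper(UserDetailsService userDetailsService) {
    return authorities -> {
        Set<GrantedAuthority> mapped = new HashSet<>(authorities);
        authorities.stream()
                .filter(OidcUserAuthority.class::isInstance)
                .map(OidcUserAuthority.class::cast)
                .findFirst()
                .ifPresent(oidc -> {
                    // Assumption: usernames in the JDBC tables are the Google email.
                    String email = oidc.getIdToken().getEmail();
                    mapped.addAll(userDetailsService.loadUserByUsername(email).getAuthorities());
                });
        return mapped;
    };
}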


Query for returning edge

MATCH (a:Person)-[l:workWith]-(b:Person) RETURN a, l, b

If I execute this query and it returns three values (start node, edge, and end node), how can I modify the query to retrieve only the information about the edge?



Discord.js Voice Not Playing Audio While AudioPlayerStatus is Playing

I can't seem to figure out why my bot isn't playing audio. I have installed all the necessary dependencies; below is the dependency log and code. In case it was the audio files, I tried different paths and YouTube links, and nothing seems to be working.

The block of code is confirmed to be running, since it logs to the console, but no audio comes through. The bot joins the voice channel with no issue.

Core Dependencies
- @discordjs/voice: 0.14.0
- prism-media: 1.3.4

Opus Libraries
- @discordjs/opus: 0.8.0
- opusscript: 0.0.8

Encryption Libraries
- sodium-native: 4.0.1
- libsodium-wrappers: 0.7.11
- tweetnacl: not found

FFmpeg
- version: 5.0.1-essentials_build-www.gyan.dev
- libopus: yes

voiceStateUpdate.js event file below

const { Events } = require('discord.js');
const { joinVoiceChannel, AudioPlayerStatus } = require('@discordjs/voice');




module.exports = {
    name: Events.VoiceStateUpdate,
    async execute(oldState, newState) {

        //Check if muting / unmuting
        //Check if deafening / undeafening
        //Check if channel Id is null i.e. if I'm leaving not arriving
        if (newState.selfMute == oldState.selfMute && newState.selfDeaf == oldState.selfDeaf && newState.channelId != null && newState.member.user.id == '188765194246291468'){
            console.log("Syan has entered");

            const fs = require('node:fs');
            const path = require('node:path');

            const seinPath = path.join(__dirname, '../seinfeld_bites');
            const seinFiles = fs.readdirSync(seinPath).filter(file => file.endsWith('.mp3'));

            const { createAudioResource } = require('@discordjs/voice');

            //const { generateDependencyReport } = require('@discordjs/voice');

            const connection = joinVoiceChannel({
                channelId: newState.channelId,
                guildId: newState.guild.id,
                adapterCreator: newState.guild.voiceAdapterCreator,
            });

            const { createAudioPlayer } = require('@discordjs/voice');

            const player = createAudioPlayer();

            const resource = createAudioResource(seinFiles[1]);
            player.play(resource);
            connection.subscribe(player);

            player.on(AudioPlayerStatus.Playing, () => {
                console.log('Playing');
            })

            //console.log(generateDependencyReport());

            //console.log(connection);
        }

    },
};

I've gone over all the dependencies, tried different audio resources, and tried it on different servers. Any help would be fantastic. I have the GUILD_VOICE_STATES intent in my index.js, and the bot is joining the voice channel, so I don't think it could be that.



How does post data work with Common Lisp?

I have a post route:

(defroute admin-post ("/admin" :method :post)
    (&post client db email)
  (let ((c (write-to-string client))
        (d (write-to-string db))
        (res (by-email email)))
    (render-template* *admin.html* nil
      :title "Admin Area"
      :clientName c
      :dbName d
      :db-res res)))

The value of email is processed by the by-email function successfully. But the c and d values are nil.

I've also tried without write-to-string, but it returns a blank value to the page.

UPDATE

Here is my html. The form names are the same as the defroute params:

<form id="form" action="/admin" method="post">
  
  <input type="text" name="client"  placeholder="Enter Client Name"/>

  <input type="text" name="db" placeholder="Enter DB name"/>

  <input type="submit" value="Send"/>
</form>



Nswag adds null check for nullable/optional parameters

I have basically the same issue as this one (details here on GitHub), but with the C# client - a [FromForm] SomeObject x on the controller has some nullable (optional) parameters, and the client generated by NSwag has null checks in place like this:

public virtual async System.Threading.Tasks.Task<Attachment> UploadAsync(int? idProject = null, int? idTicket = null...
...
if (idProject == null) throw new System.ArgumentNullException("idProject");
else
{
    content_.Add(new System.Net.Http.StringContent(ConvertToString(idProject, System.Globalization.CultureInfo.InvariantCulture)), "IdProject");
}
...

Both original model (from API project) and generated one in client project have those fields as nullable and function call accepts nullable values.

JSON schema from swagger looks like this:

"/Attachment/Upload": {
  "post": {
    "tags": [
      "Attachment"
    ],
    "requestBody": {
      "content": {
        "multipart/form-data": {
          "schema": {
            "required": [
              "Name"
            ],
            "type": "object",
            "properties": {
              "IdProject": {
                "type": "integer",
                "format": "int32"
              },
              "IdTicket": {
                "type": "integer",
                "format": "int32"
              },...

I've tried setting "queryNullValue": "" in openApiToCSharpClient, but it does not help. How can I disable those checks in the generated client? I must use [FromForm] since I'm sending both file(s) and some additional data with them.

Nswag file (generator settings only):

"openApiToCSharpClient": {
  "generateClientInterfaces": true,
  "GenerateClientClasses": true,
  "useBaseUrl": false,
  "namespace": "xxxxxxxx.APIClient",
  "className": "{controller}Client",
  "operationGenerationMode": "MultipleClientsFromFirstTagAndOperationName",
  "jsonLibrary": "SystemTextJson",
  "generateDtoTypes": true,
  "disposeHttpClient": true,
  "injectHttpClient": true,
  "httpClientType": "System.Net.Http.HttpClient",
  "UseHttpRequestMessageCreationMethod": false,
  "generateBaseUrlProperty": false,
  "generateOptionalParameters": true,
  "parameterArrayType": "System.Collections.Generic.IReadOnlyList",
  "responseArrayType": "System.Collections.Generic.IReadOnlyList",
  "generateOptionalPropertiesAsNullable": true,
  "generateNullableReferenceTypes": true,
  "output": "Client.g.cs",
  "generateExceptionClasses": true,
  "dateType": "System.DateTime",
  "dateTimeType": "System.DateTime",
  "queryNullValue": "",
  "additionalNamespaceUsages": [
    "global::APIClient"
  ]
}


2023-02-25

Using Mockk to mock out a singleton object to ignore Auth journey

I am using Mockk and I have the need to intercept when an API client is being created.

The API client does a bunch of REST stuff that I don't want to happen inside of its constructor. I have tried a bunch of things but can't seem to find a way to not actually run the constructor and just return something.

I don't want to actually run anything when the object is created. Is this possible?

I've tried:

Class I want to mock:

class TestApi(config: Config) {
    val auth = Auth.authenticate(config) // Don't want this specifically to run
}

Caller:

fun createClient(): TestApi {
    return TestApi(ConfigObj())
}

Then in the test

@Test
fun `sample code`() {
  mockkConstructor(TestApi::class)
  every { anyConstructed<TestApi>() } returns FakeInstance()
  
  // other test stuff always fails as TestApi() still runs the full init with the auth flow
}


Set Windows shortcut arguments path relative

My goal is to create a portable PDF that opens with the portable browser I include in the package. The problem is that the browser shortcut's own path updates itself correctly as a relative path, but the argument in the Target field (the PDF file I want the browser to open when the shortcut is clicked) remains on the old path.

The challenge is to make the argument path relative to wherever the shortcut is placed (the shortcut and the folder with the browser and PDF are moved together).

I've tried making the argument path in the Target field relative to the Start in field, as in the following example, but it won't work:

Target: "E:\DATA\GoogleChromePortable.exe" .\myPDF.pdf
Start in: E:\DATA

Both myPDF and myShortcut are placed in the same folder (DATA), and when the folder is moved, the Target and Start in paths adapt to the new location, but the argument does not.



SQL Query Optimisation for multiple table joins with millions of records

Can anyone suggest a way to optimise the given query? It has multiple joins, and when we try it with a larger dataset of 10M records, the query takes a long time.

SELECT 
    AUDM.distributeTS AS distributionDate,
    ASMT.name,
    ACM.studentid,
    AUDM.users AS distributedTo,
    ROUND(AUM.totaluser * 100 / AUDM.users) AS participation,
    ROUND(AUM.score * 5 / (AUM.totaluser * AQM.qi)) AS performance
FROM
    (SELECT 
        name, assessmentId
    FROM
        Assessment
    WHERE
        type IN ('PSYCHOMETRIC' , 'QUANTITATIVE', '')
            AND removed = FALSE) ASMT
        LEFT JOIN
    (SELECT 
        studentid, assessmentId
    FROM
        AssessmentCreatorMap) ACM ON ACM.assessmentId = ASMT.AssessmentId
        LEFT JOIN
    (SELECT 
        assessmentId, COUNT(assessmentId) AS qi
    FROM
        AssessmentQuestionMap
    GROUP BY assessmentId) AQM ON AQM.assessmentId = ASMT.assessmentId
        LEFT JOIN
    (SELECT 
         COUNT(userId) AS users, distributeTS, assessmentId
    FROM
        AssessmentUserDistributeMap 
    GROUP BY assessmentId) AUDM ON AUDM.assessmentId = ASMT.assessmentId
        LEFT JOIN
    (SELECT 
        assessmentId,
            COUNT(assessmentId) AS totaluser,
            SUM(assessmentScore) AS score
    FROM
        AssessmentUserMap
    JOIN Student ON AssessmentUserMap.studentId = Student.studentid
    WHERE
        enrollmentDate IS NOT NULL
            AND isDeleted = FALSE
    GROUP BY assessmentId) AUM ON AUM.assessmentId = ASMT.assessmentId
ORDER BY ASMT.AssessmentId DESC
LIMIT 0 , 15;

explain yields the following result.

'1', 'PRIMARY', 'Assessment', NULL, 'index', NULL, 'PRIMARY', '4', NULL, '1', '5.00', 'Using where; Backward index scan'
'1', 'PRIMARY', 'AssessmentCreatorMap', NULL, 'ref', 'fk_AssessmentCreatorMap_aid_idx', 'fk_AssessmentCreatorMap_aid_idx', '5', 'OustMe_UAT.Assessment.AssessmentId', '1', '100.00', NULL
'1', 'PRIMARY', '<derived4>', NULL, 'ref', '<auto_key0>', '<auto_key0>', '5', 'OustMe_UAT.Assessment.AssessmentId', '10', '100.00', NULL
'1', 'PRIMARY', '<derived5>', NULL, 'ref', '<auto_key0>', '<auto_key0>', '5', 'OustMe_UAT.Assessment.AssessmentId', '601', '100.00', NULL
'1', 'PRIMARY', '<derived6>', NULL, 'ref', '<auto_key0>', '<auto_key0>', '5', 'OustMe_UAT.Assessment.AssessmentId', '10', '100.00', NULL
'6', 'DERIVED', 'AssessmentUserMap', NULL, 'ALL', 'fk_AssessmentUserMap_assessmentid_idx,fk_aum_studentid,idx_AssessmentUserMap_assessmentId_enrollmentDate,idx_AssessmentUserMap_assessmentId_studentid', NULL, NULL, NULL, '1055', '90.00', 'Using where; Using temporary'
'6', 'DERIVED', 'Student', NULL, 'eq_ref', 'studentid_UNIQUE,idx_Student_studentid,fk_student_isdel', 'studentid_UNIQUE', '182', 'OustMe_UAT.AssessmentUserMap.studentid', '1', '50.00', 'Using index condition; Using where'
'5', 'DERIVED', 'AssessmentUserDistributeMap', NULL, 'index', 'fk_AssessmentUserDistributeMap_aid_idx,idx_AssessmentUserDistributeMap_assessmentId_userId,idx_assessmentUserDistributeMap_userId_assessmentId', 'fk_AssessmentUserDistributeMap_aid_idx', '5', NULL, '397282', '100.00', NULL
'4', 'DERIVED', 'AssessmentQuestionMap', NULL, 'index', 'fk_AssessmentQuestionMap_aid_idx', 'fk_AssessmentQuestionMap_aid_idx', '5', NULL, '3308', '100.00', 'Using index'

Most of the tables already have indexes. Please comment if there is any need to add a new index, or on how we can rewrite the query to produce the same result set.



compensate transactions in saga for failures

i have a "compensatable transaction" tx that is part of a choreography saga and was wondering how to deal with compensation when a bug is introduced in the system. It is clear that if there is a business requirement not fullfiled by tx an *FailedEvent must be emitted in order to start the compensation action chain, but should an event be published as a result of a failure (null pointer, out of memory, and so) ? In my opinion this should be treated as a bug, compensation is not fired and shuould be fixed with a manuall process. This forces me to add some generic error event in a global exception handler. Not sure about it.. Thanks



Scraping data from https://ift.tt/Fq2rypk in Python

I'm trying to follow the steps from this article to scrape data from the Transfermarkt website, but I'm not getting the desired output. It seems some of the classes have changed since the article was written, so I've had to change

Players = pageSoup.find_all("a", {"class": "spielprofil_tooltip"}) to

Players = pageSoup.find_all("td", {"class": "hauptlink"})

from bs4 import BeautifulSoup
import requests
import pandas as pd

headers = {'User-Agent': 
           'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/Version 110.0.5481.100 Safari/537.36'}

page = "https://www.transfermarkt.co.uk/transfers/transferrekorde/statistik/top/plus/0/galerie/0?saison_id=2000"
pageTree = requests.get(page, headers=headers)
pageSoup = BeautifulSoup(pageTree.content, 'html.parser')

Players = pageSoup.find_all("td", {"class": "hauptlink"})
Values = pageSoup.find_all("td", {"class": "rechts hauptlink"})

PlayersList = []
ValuesList = []

for i in range(0,25):
    PlayersList.append(Players[i].text)
    ValuesList.append(Values[i].text)
    
df = pd.DataFrame({"Players":PlayersList,"Values":ValuesList})

df.head(10)

The problem with this is that it finds other cells with this class and adds them to the Players variable, e.g. Players[0].text returns '\nLuís Figo ' and Players[1].text returns '\nReal Madrid', because team names use the same class as player names. How can I select the first hauptlink class, or somehow differentiate which one I want if they are the same?
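
For illustration, a minimal sketch of one way to differentiate the cells: iterate over the table rows and take only the first hauptlink cell of each row. It reuses pageSoup from the snippet above; the table.items selector is an assumption about the page's markup:

rows = pageSoup.select("table.items tbody tr")

PlayersList = []
ValuesList = []
for row in rows:
    player_cell = row.select_one("td.hauptlink")          # first hauptlink in a row should be the player
    value_cell = row.select_one("td.rechts.hauptlink")    # the fee column carries both classes
    if player_cell is None or value_cell is None:
        continue  # skip header/spacer rows without these cells
    PlayersList.append(player_cell.get_text(strip=True))
    ValuesList.append(value_cell.get_text(strip=True))

df = pd.DataFrame({"Players": PlayersList, "Values": ValuesList})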



How to calculate date difference from two columns but with different rows and a condition?

Based on the example dataframe below, I would like to calculate the difference between two datetimes for a certain index, and its cumulative total. The expected results are as in the columns diff_days and cum_diff_days.

index  date_a    date_b     diff_days  cum_diff_days
1      1/1/2023  NaT        NaT        -
1      NaT       NaT        NaT        -
1      NaT       3/1/2023   2          2
2      4/1/2023  NaT        NaT        -
2      NaT       NaT        NaT        -
2      NaT       6/1/2023   2          4
3      7/1/2023  NaT        NaT        -
3      NaT       8/1/2023   1          5
3      9/1/2023  NaT        NaT        -
3      NaT       NaT        NaT        -
3      NaT       11/1/2023  2          7

I have checked the other post where it calculates the difference between two dates; unfortunately, that one is for when the dates are in the same row. For my case, I wanted to understand how to calculate the dates if they are in different rows and different columns, since just subtracting them with df['diff_days'] = df['date_a'] - df['date_b'] will produce NaT results. I would really appreciate it if someone could enlighten me on this problem.
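
For illustration, a minimal pandas sketch of one possible approach, assuming each date_b pairs with the most recent date_a above it within the same index group (that pairing rule is an assumption read off the expected output):

import pandas as pd

# Toy frame mirroring the question (day-first date strings, None where NaT)
df = pd.DataFrame({
    "index":  [1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3],
    "date_a": ["1/1/2023", None, None, "4/1/2023", None, None,
               "7/1/2023", None, "9/1/2023", None, None],
    "date_b": [None, None, "3/1/2023", None, None, "6/1/2023",
               None, "8/1/2023", None, None, "11/1/2023"],
})
df["date_a"] = pd.to_datetime(df["date_a"], dayfirst=True)
df["date_b"] = pd.to_datetime(df["date_b"], dayfirst=True)

# Carry the most recent date_a forward within each index group, then
# subtract it on the rows where date_b is present.
last_a = df.groupby("index")["date_a"].ffill()
df["diff_days"] = (df["date_b"] - last_a).dt.days
df["cum_diff_days"] = df["diff_days"].cumsum()  # NaN rows stay NaN, the running total continues
print(df)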



2023-02-24

How to generate the same random number sequence within each thread?

I have code that converts an image with 32 output layers, from an AI segmentation model output, into a single layer, where each pixel in each layer has a probability proportional to its score of making it into the single layer. In order to do that, I need to generate a random float number to figure out which of the 32 layers is going to be the winner.

When I run this code in a single thread, it generates the same output every single time. However, when I use it with OMP (to make it faster), it generates a different output every time, even when I make the random number generator private to each thread and initialize it with the same seed (for each row). I also tried to hardcode the seed to 0 and it did not solve the problem. It is like one thread is interfering with the sequence of the numbers in the other one.

I need this code to generate consistently the same result every time in order to make it easier to test the output. Any idea?

    cv::Mat prediction_map(aiPanoHeight, aiPanoWidth, CV_8UC1);
#pragma omp parallel for schedule(dynamic, aiPanoHeight/32)
    for (int y=0;y<aiPanoHeight;++y){
        std::minstd_rand0 rng(y); // update: the error was the 'static' keyword here; one generator was shared by all threads, so constructing it per row (per seed) restores determinism
        std::uniform_real_distribution<float> dist(0, 1);
        for (int x=0;x< aiPanoWidth;++x){
            float values[NUM_CLASSES];
            // populate values with the normalized score for each class, so that the total is 1
            float r = dist(rng);
            for (int c = 0; c < NUM_CLASSES; ++c)
            {
                r -= values[c];
                if(r<=0) {
                    // correctedX is computed elsewhere in the full code (not shown here)
                    prediction_map.at<uchar>(y, correctedX) = int(aiClassesLUT[c]); // paint prediction map with the corresponding color of the winning layer
                    break;
                }
            }
        }
    }
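
The same idea sketched in Python, for clarity: construct a fresh generator inside each work item, seeded by the item's index, so the sequence depends only on the data and never on thread scheduling (process_row and the sizes are made up for illustration):

from concurrent.futures import ThreadPoolExecutor
import random

def process_row(y, width=8):
    rng = random.Random(y)  # per-row generator with a deterministic seed
    return [rng.random() for _ in range(width)]

with ThreadPoolExecutor(max_workers=4) as pool:
    rows = list(pool.map(process_row, range(6)))
# 'rows' is identical across runs and across any number of worker threads.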



How to update a sqlite3 database connected to a treeview in tkinter

I am trying to update a sqlite3 database from a tkinter treeview. It should be relatively simple, but I am stuck.

My program consists of two files, one with the functions for the backend, the other with just the tkinter layout and its own functions for the app.

The function in the backend file works fine and I have tested it:

def update_book(book_id, book_title, book_author, book_qty):
    db = sqlite3.connect("bookstore.db")
    cursor = db.cursor()
    cursor.execute("""UPDATE books
    SET title = ?, author = ?, qty = ?
    WHERE id = ?""",
    (book_title, book_author, book_qty, book_id))
    db.commit()
    db.close()

This instead is the function from the tkinter layout file, responsible for updating the entry. At the moment it updates just the entry in the treeview, but NOT in the database. I tried to print the result of the backend function (update_book()) to see what happens, but it returns None. I do not understand what I am doing wrong... By the way, 'table' is the Treeview widget.

def update_button():
    selected = table.focus()
    data = table.item(selected, "values")
    table.item(selected, values=(id_text.get(), title_text.get(), author_text.get(), qnty_text.get()))
    print(update_book(id_text.get(), title_text.get(), author_text.get(), qnty_text.get()))
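
As an aside, update_book() printing None is expected: the function has no return statement, so it returns None. A minimal sketch of a variant that returns cursor.rowcount instead, so the caller can at least see whether the WHERE clause matched any row (0 would suggest the id being passed in does not exist in the table):

import sqlite3

def update_book(book_id, book_title, book_author, book_qty):
    db = sqlite3.connect("bookstore.db")
    cursor = db.cursor()
    cursor.execute("""UPDATE books
    SET title = ?, author = ?, qty = ?
    WHERE id = ?""",
    (book_title, book_author, book_qty, book_id))
    db.commit()
    rows = cursor.rowcount  # number of rows the UPDATE affected
    db.close()
    return rows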

Any help would be appreciated. As I am new to programming, if possible please do not include super difficult stuff to understand.

Thank you



2023-02-23

Avalonia / LibVLCsharp support for iOS, Android and WASM

I'm planning to create a cross-platform (Windows, Linux, macOS, Android, iOS, Wasm) audio player using the latest AvaloniaUI along with LibVLCSharp. Unfortunately, only support for Windows, Linux, and macOS is listed for Avalonia.

I think that this might be a lack of documentation only, since Avalonia pretty recently introduced Android and iOS support officially.

So what is the state of this? Would it be possible to create a REAL cross-platform player for all the listed platforms with LibVLCSharp? And if not, is there an alternative that could be used with AvaloniaUI?

I found these libs for C#, that are (partially) capable of playing audio:

  • LibVLCSharp (unmanaged/wrapper, cross platform including Android + iOS)
  • SharpAudio (mostly managed, cross platform, but poor codec support atm)
  • cscore (unmanaged/wrapper, well designed, development stalled)
  • libsoundio-sharp (unmanaged, pretty raw)
  • ManagedBass (unmanaged/wrapper for BASS, awesome but only free for open source)
  • NAudio (awesome managed library, but windows only atm, although efforts to evolve to cross platform)


Cytoscape: How to select a specific GPU card? Nvidia over Intel

My PC has two cards: one Intel and one NVIDIA. Although I select the NVIDIA card in the Cytoscape Desktop OpenCL preferences, everything runs on the Intel one.
How can I force it to use the NVIDIA one? Many thanks!

I already made sure I have installed all the appropriate drivers.



How can I do a parallel reduction approach to combine the partial sums in C?

I have to compute partial sums using a parallel reduction approach in C, but I don't have any idea about it, so I need the community's guidance to achieve this.

What I need to achieve: for example, with eight computational threads, in the first reduction step each thread adds two elements of the array; thread 4 should wait for thread 0 to be done, thread 5 has to wait for thread 1, thread 6 waits for thread 2, and thread 7 should wait for thread 3.

Now, in the second step, thread 6 waits for thread 4 to finish, and thread 7 waits for thread 5 to finish.

In the third step, thread 7 waits for thread 6 to be done; then the whole array needs to be printed.

Please help me; any guidance on how to achieve this would be appreciated.
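
For illustration, here is the described schedule sketched in runnable Python (the question asks for C, where the same structure would use a pthread barrier). The offsets 4, 2, 1 with eight threads reproduce exactly the waits listed above; a full barrier between steps is slightly stronger synchronization than the pairwise waits, but it is simple and correct:

import threading

data = [1, 2, 3, 4, 5, 6, 7, 8]  # one element per thread
N = len(data)
barrier = threading.Barrier(N)

def worker(i):
    # Step 1 (offset 4): threads 4..7 add the element 4 positions back (4<-0, 5<-1, 6<-2, 7<-3).
    # Step 2 (offset 2): threads 6..7 add the element 2 positions back (6<-4, 7<-5).
    # Step 3 (offset 1): thread 7 adds the element 1 position back (7<-6).
    for offset, first_active in ((4, 4), (2, 6), (1, 7)):
        barrier.wait()  # everyone finishes the previous step before anyone continues
        if i >= first_active:
            data[i] += data[i - offset]
    barrier.wait()
    if i == N - 1:
        print(data)  # data[7] now holds the total sum of the array

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()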



How to pass new data from one page to another page when using Angular HttpClient and InMemoryDbService?

I tried adding a new position on the "Positions" page, and it appeared in the position list.

But I am not sure how to make the new position appear in the dropdown list on the "Employees" page.

Here's what I've done so far.

in-memory-data-service.ts

...

export class InMemoryDataService implements InMemoryDbService {
  createDb() {
    const employees = [
      { id: 'abc1', name: 'abc', position: 'Manager'},
      { id: 'abc2', name: 'def', position: 'Manager'},
      { id: 'abc3', name: 'ghi', position: 'Developer'},
      { id: 'abc4', name: 'jkl', position: 'Consultant'},
      { id: 'abc5', name: 'mno', position: 'Developer'},
      { id: 'abc6', name: 'pqr', position: 'IT Intern'}
    ];
    const positions = [
      { position: 'Manager'},
      { position: 'Developer'},
      { position: 'Consultant'},
      { position: 'IT Intern'}
    ];
    return {employees, positions};
  }
  constructor() { }
}

employeePosition.services.ts

...
export class employeePositionService {

  private positionsUrl = 'api/positions';

  httpOptions = {
    headers: new HttpHeaders({ 'Content-Type': 'application/json' })
  };

  constructor(private http: HttpClient) { }

  getEmployeePositions(): Observable<employeePosition[]> {
    return this.http.get<employeePosition[]>(this.positionsUrl).pipe(
      tap(_ => this.log('fetched positions')),
      catchError(this.handleError<employeePosition[]>('getEmployeePositions', []))
    );
  }

  addEmployeePosition(position: employeePosition): Observable<employeePosition> {
    return this.http.post<employeePosition>(this.positionsUrl, position, this.httpOptions).pipe(
      tap((newPosition: employeePosition) => this.log(`added position with id=${newPosition}`)),
      catchError(this.handleError<employeePosition>('addEmployeePosition'))
    );
  }
...

positions.component.html

...
<tr *ngFor="let employeePosition of employeePositions; let i = index" scope="row">
        <td class="text-center">
          
        </td>
        <td class="text-center">
          <button mdbRipple type="button" class="btn btn-primary btn-sm" 
             (click)="editPosition(employeePosition, i)" >Edit</button>&nbsp;
          <button mdbRipple type="button" class="btn btn-primary btn-sm" 
             (click)="deletePosition(employeePosition, i, employeePositions)" >Delete</button>
        </td>
      </tr>
...

positions.component.ts

...
employeePosition: employeePosition | undefined;
employeePositions: employeePosition[] = [];
...
getEmployeePositions(): void {
  this.employeePositionService.getEmployeePositions()
   .subscribe(employeePositions => this.employeePositions = employeePositions);
}

addPosition(size: string = ''): void {
  this.modalRefAdd = this.modalService.open(AddPositionFormComponent, {
    modalClass: size,
    data: {}
  });

  this.modalRefAdd.onClose.subscribe(res => {
    if(res != null) {
      this.employeePositions = [...this.employeePositions, res];
    } else {
      close();
    }
  });
}

add-position-form.component.ts

...
onSubmit() {
    this.positionForm.markAllAsTouched();
    if(this.positionForm.invalid) {
      return
    } else {
      const data = this.positionForm.value;
      console.log(data);
      console.log(data.position);
      this.employeePositionService.addEmployeePosition({position: data.position} as 
      employeePosition).subscribe(result => {
        this.employeePositionService.getEmployeePositions().subscribe(results => console.log(results))
      });

      this.modalRefAdd.close(data);
    }
  }
...

add-user-form.component.html

<div id="add-employee-position">
  <div class="modal-header evonik white-text">
      <h5 class="modal-title text-white">Create New Position</h5>
      <button type="button" class="close pull-right" aria-label="Close" (click)="modalRefAdd.close()"><span aria-hidden="true" style="color: #000000;">×</span></button>
  </div>

  <div class="modal-body m-0 p-50">
      <form [formGroup]="positionForm" (ngSubmit)="onSubmit()">
          <div class="form-group pb-1">
              <label>Position:</label>
          </div>
          <div class="class form-row pb-3">
              <div class="class col">
                  <mdb-form-control>
                      <input mdbValidate mdbInput type="text" formControlName="position" class="form-control" required>
                      <label mdbLabel class="form-label">Enter a new position here</label>
                      <mdb-error *ngIf="position?.invalid && (position?.dirty || position?.touched)">New position is required</mdb-error>
                  </mdb-form-control>
              </div>
          </div>
      </form>
  </div>
  <div class="modal-footer mt-1">
      <button mdbRipple type="button" class="btn btn-outline-primary" (click)="modalRefAdd.close()">Cancel</button>
      <button mdbRipple type="button" class="btn btn-primary" (click)="onSubmit()">Confirm</button>
  </div>
</div>

add-user-form.component.ts

...
employeePositions: employeePosition[] = [];
...
get id(): AbstractControl {
    return this.userForm.get('id')!;
   }

   get name(): AbstractControl {
    return this.userForm.get('name')!;
   }

   get position(): AbstractControl {
    return this.userForm.get('position')!;
   }

  ngOnInit(): void {    
    this.getEmployeePositions();
  }

  getEmployeePositions(): void {
    this.employeePositionService.getEmployeePositions()
      .subscribe(employeePositions => this.employeePositions = employeePositions);
  }

  onSubmit() {
    this.userForm.markAllAsTouched();
    if(this.userForm.invalid) {
      return
    } else {
      const data = this.userForm.value;
      console.log(data);
      console.log(data.id);
      this.employeeService.addEmployee({id : data.id, name: data.name, position: data.position} as Employee).subscribe(result => {
        this.employeeService.getEmployees().subscribe(results => console.log(results))
      });
  
      this.modalRefAdd.close(data);
    }
  }
...

I hope someone can assist me with this problem. Thank you in advance.



Why do I get a MobSF error during setup?

While installing MobSF (run.bat) I get this error during installation on a Win10 machine. I have C++ installed. Do you have any idea why this breaks? Thanks.

I have tried running "clean.bat", installing C++, and installing the SDK.

Microsoft Windows [Version 10.0.19044.2604]
(c) Microsoft Corporation. All rights reserved.

C:\Users\DAS>cd C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF

C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF>./setup.bat
"." no se reconoce como un comando interno o externo,
programa o archivo por lotes ejecutable.

C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF>setup.bat
[INSTALL] Checking for Python version 3.8+
[INSTALL] Found Python 3.10.4
[INSTALL] Found pip
Requirement already satisfied: pip in c:\users\das\appdata\local\programs\python\python310\lib\site-packages (23.0.1)
[INSTALL] Found OpenSSL executable
[INSTALL] Found Visual Studio Build Tools
[INSTALL] Creating venv
Requirement already satisfied: pip in c:\users\das\desktop\k\pentesting android\mobile-security-framework-mobsf\venv\lib\site-packages (22.0.4)
Collecting pip
  Using cached pip-23.0.1-py3-none-any.whl (2.1 MB)
Installing collected packages: pip
  Attempting uninstall: pip
    Found existing installation: pip 22.0.4
    Uninstalling pip-22.0.4:
      Successfully uninstalled pip-22.0.4
Successfully installed pip-23.0.1
[INSTALL] Installing Requirements
Collecting wheel
  Downloading wheel-0.38.4-py3-none-any.whl (36 kB)
Installing collected packages: wheel
Successfully installed wheel-0.38.4
Ignoring gunicorn: markers 'platform_system != "Windows"' don't match your environment
Collecting Django>=3.1.5
  Downloading Django-4.1.7-py3-none-any.whl (8.1 MB)
     ---------------------------------------- 8.1/8.1 MB 12.0 MB/s eta 0:00:00
Collecting lxml>=4.6.2
  Downloading lxml-4.9.2-cp310-cp310-win_amd64.whl (3.8 MB)
     ---------------------------------------- 3.8/3.8 MB 12.7 MB/s eta 0:00:00
Collecting rsa>=4.7
  Downloading rsa-4.9-py3-none-any.whl (34 kB)
Collecting biplist>=1.0.3
  Downloading biplist-1.0.3.tar.gz (21 kB)
  Preparing metadata (setup.py) ... done
Collecting requests>=2.25.1
  Downloading requests-2.28.2-py3-none-any.whl (62 kB)
     ---------------------------------------- 62.8/62.8 kB ? eta 0:00:00
Collecting bs4>=0.0.1
  Downloading bs4-0.0.1.tar.gz (1.1 kB)
  Preparing metadata (setup.py) ... done
Collecting colorlog>=4.7.2
  Downloading colorlog-6.7.0-py2.py3-none-any.whl (11 kB)
Collecting macholib>=1.14
  Downloading macholib-1.16.2-py2.py3-none-any.whl (38 kB)
Collecting whitenoise>=5.2.0
  Downloading whitenoise-6.3.0-py3-none-any.whl (19 kB)
Collecting waitress>=1.4.4
  Downloading waitress-2.1.2-py3-none-any.whl (57 kB)
     ---------------------------------------- 57.7/57.7 kB ? eta 0:00:00
Collecting psutil>=5.8.0
  Downloading psutil-5.9.4-cp36-abi3-win_amd64.whl (252 kB)
     ---------------------------------------- 252.5/252.5 kB 15.1 MB/s eta 0:00:00
Collecting shelljob>=0.6.2
  Downloading shelljob-0.6.3-py3-none-any.whl (9.9 kB)
Collecting asn1crypto>=1.4.0
  Downloading asn1crypto-1.5.1-py2.py3-none-any.whl (105 kB)
     ---------------------------------------- 105.0/105.0 kB ? eta 0:00:00
Collecting oscrypto>=1.2.1
  Downloading oscrypto-1.3.0-py2.py3-none-any.whl (194 kB)
     ---------------------------------------- 194.6/194.6 kB 11.5 MB/s eta 0:00:00
Collecting distro>=1.5.0
  Downloading distro-1.8.0-py3-none-any.whl (20 kB)
Collecting IP2Location==8.9.0
  Downloading IP2Location-8.9.0-py3-none-any.whl (16 kB)
Collecting lief>=0.12.1
  Downloading lief-0.12.3-cp310-cp310-win_amd64.whl (4.9 MB)
     ---------------------------------------- 4.9/4.9 MB 12.9 MB/s eta 0:00:00
Collecting http-tools>=2.1.0
  Downloading http-tools-2.1.1.tar.gz (550 kB)
     ---------------------------------------- 550.3/550.3 kB 17.4 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Collecting libsast>=1.5.1
  Downloading libsast-1.5.2.tar.gz (36 kB)
  Preparing metadata (setup.py) ... done
Collecting pdfkit>=0.6.1
  Downloading pdfkit-1.0.0-py3-none-any.whl (12 kB)
Collecting google-play-scraper>=0.1.2
  Downloading google_play_scraper-1.2.3-py3-none-any.whl (28 kB)
Collecting androguard==3.4.0a1
  Downloading androguard-3.4.0a1-py3-none-any.whl (918 kB)
     ---------------------------------------- 918.1/918.1 kB 14.6 MB/s eta 0:00:00
Collecting apkid==2.1.4
  Downloading apkid-2.1.4-py2.py3-none-any.whl (116 kB)
     ---------------------------------------- 116.6/116.6 kB ? eta 0:00:00
Collecting quark-engine==22.10.1
  Downloading quark_engine-22.10.1-py3-none-any.whl (97 kB)
     ---------------------------------------- 97.6/97.6 kB ? eta 0:00:00
Collecting frida==15.2.2
  Downloading frida-15.2.2.tar.gz (11 kB)
  Preparing metadata (setup.py) ... done
Collecting tldextract==3.4.0
  Downloading tldextract-3.4.0-py3-none-any.whl (93 kB)
     ---------------------------------------- 93.9/93.9 kB 5.2 MB/s eta 0:00:00
Collecting openstep-parser==1.5.4
  Downloading openstep_parser-1.5.4-py3-none-any.whl (4.5 kB)
Collecting svgutils==0.3.4
  Downloading svgutils-0.3.4-py3-none-any.whl (10 kB)
Collecting ruamel.yaml==0.16.13
  Downloading ruamel.yaml-0.16.13-py2.py3-none-any.whl (111 kB)
     ---------------------------------------- 111.9/111.9 kB ? eta 0:00:00
Collecting click==8.0.1
  Downloading click-8.0.1-py3-none-any.whl (97 kB)
     ---------------------------------------- 97.4/97.4 kB ? eta 0:00:00
Collecting decorator==4.4.2
  Downloading decorator-4.4.2-py2.py3-none-any.whl (9.2 kB)
Collecting asgiref<4,>=3.5.2
  Downloading asgiref-3.6.0-py3-none-any.whl (23 kB)
Collecting tzdata; sys_platform == "win32"
  Downloading tzdata-2022.7-py2.py3-none-any.whl (340 kB)
     ---------------------------------------- 340.1/340.1 kB 10.6 MB/s eta 0:00:00
Collecting sqlparse>=0.2.2
  Downloading sqlparse-0.4.3-py3-none-any.whl (42 kB)
     ---------------------------------------- 42.8/42.8 kB ? eta 0:00:00
Collecting pyasn1>=0.1.3
  Downloading pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
     ---------------------------------------- 77.1/77.1 kB ? eta 0:00:00
Collecting urllib3<1.27,>=1.21.1
  Downloading urllib3-1.26.14-py2.py3-none-any.whl (140 kB)
     ---------------------------------------- 140.6/140.6 kB ? eta 0:00:00
Collecting charset-normalizer<4,>=2
  Downloading charset_normalizer-3.0.1-cp310-cp310-win_amd64.whl (96 kB)
     ---------------------------------------- 96.5/96.5 kB 5.7 MB/s eta 0:00:00
Collecting certifi>=2017.4.17
  Downloading certifi-2022.12.7-py3-none-any.whl (155 kB)
     ---------------------------------------- 155.3/155.3 kB 9.7 MB/s eta 0:00:00
Collecting idna<4,>=2.5
  Downloading idna-3.4-py3-none-any.whl (61 kB)
     ---------------------------------------- 61.5/61.5 kB ? eta 0:00:00
Collecting beautifulsoup4
  Downloading beautifulsoup4-4.11.2-py3-none-any.whl (129 kB)
     ---------------------------------------- 129.4/129.4 kB ? eta 0:00:00
Collecting colorama; sys_platform == "win32"
  Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting altgraph>=0.17
  Downloading altgraph-0.17.3-py2.py3-none-any.whl (21 kB)
Collecting mitmproxy==6.0.2
  Downloading mitmproxy-6.0.2-py3-none-any.whl (1.1 MB)
     ---------------------------------------- 1.1/1.1 MB 14.3 MB/s eta 0:00:00
Collecting markupsafe==2.0.1
  Downloading MarkupSafe-2.0.1-cp310-cp310-win_amd64.whl (15 kB)
Collecting pyyaml>=6.0
  Downloading PyYAML-6.0-cp310-cp310-win_amd64.whl (151 kB)
     ---------------------------------------- 151.7/151.7 kB 8.8 MB/s eta 0:00:00
Collecting networkx>=2.2
  Downloading networkx-3.0-py3-none-any.whl (2.0 MB)
     ---------------------------------------- 2.0/2.0 MB 13.0 MB/s eta 0:00:00
Collecting matplotlib>=3.0.2
  Downloading matplotlib-3.7.0-cp310-cp310-win_amd64.whl (7.6 MB)
     ---------------------------------------- 7.6/7.6 MB 12.5 MB/s eta 0:00:00
Collecting pygments>=2.3.1
  Downloading Pygments-2.14.0-py3-none-any.whl (1.1 MB)
     ---------------------------------------- 1.1/1.1 MB 11.8 MB/s eta 0:00:00
Collecting pydot>=1.4.1
  Downloading pydot-1.4.2-py2.py3-none-any.whl (21 kB)
Collecting ipython>=5.0.0
  Downloading ipython-8.10.0-py3-none-any.whl (784 kB)
     ---------------------------------------- 784.3/784.3 kB 12.5 MB/s eta 0:00:00
Collecting yara-python-dex>=1.0.1
  Downloading yara_python_dex-1.0.4-cp310-cp310-win_amd64.whl (130 kB)
     ---------------------------------------- 130.2/130.2 kB ? eta 0:00:00
Collecting kaleido
  Downloading kaleido-0.2.1-py2.py3-none-win_amd64.whl (65.9 MB)
     ---------------------------------------- 65.9/65.9 MB 12.6 MB/s eta 0:00:00
Collecting prettytable>=1.0.0
  Downloading prettytable-3.6.0-py3-none-any.whl (27 kB)
Collecting tqdm
  Downloading tqdm-4.64.1-py2.py3-none-any.whl (78 kB)
     ---------------------------------------- 78.5/78.5 kB ? eta 0:00:00
Collecting plotly
  Downloading plotly-5.13.0-py2.py3-none-any.whl (15.2 MB)
     ---------------------------------------- 15.2/15.2 MB 12.8 MB/s eta 0:00:00
Collecting prompt-toolkit==3.0.19
  Downloading prompt_toolkit-3.0.19-py3-none-any.whl (368 kB)
     ---------------------------------------- 368.4/368.4 kB 23.9 MB/s eta 0:00:00
Collecting pandas
  Downloading pandas-1.5.3-cp310-cp310-win_amd64.whl (10.4 MB)
     ---------------------------------------- 10.4/10.4 MB 12.6 MB/s eta 0:00:00
Collecting rzpipe
  Downloading rzpipe-0.4.0-py3-none-any.whl (11 kB)
Collecting graphviz
  Downloading graphviz-0.20.1-py3-none-any.whl (47 kB)
     ---------------------------------------- 47.0/47.0 kB ? eta 0:00:00
Requirement already satisfied: setuptools in c:\users\das\desktop\k\pentesting android\mobile-security-framework-mobsf\venv\lib\site-packages (from frida==15.2.2->-r requirements.txt (line 26)) (58.1.0)
Collecting requests-file>=1.4
  Downloading requests_file-1.5.1-py2.py3-none-any.whl (3.7 kB)
Collecting filelock>=3.0.8
  Downloading filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting soupsieve>1.2
  Downloading soupsieve-2.4-py3-none-any.whl (37 kB)
Collecting tornado<7,>=4.3
  Downloading tornado-6.2-cp37-abi3-win_amd64.whl (425 kB)
     ---------------------------------------- 425.3/425.3 kB 13.4 MB/s eta 0:00:00
Collecting pyparsing<2.5,>=2.4.2
  Downloading pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
     ---------------------------------------- 67.8/67.8 kB ? eta 0:00:00
Collecting passlib<1.8,>=1.6.5
  Downloading passlib-1.7.4-py2.py3-none-any.whl (525 kB)
     ---------------------------------------- 525.6/525.6 kB 16.6 MB/s eta 0:00:00
Collecting pydivert<2.2,>=2.0.3; sys_platform == "win32"
  Downloading pydivert-2.1.0-py2.py3-none-any.whl (104 kB)
     ---------------------------------------- 104.7/104.7 kB ? eta 0:00:00
Collecting protobuf<3.15,>=3.14
  Downloading protobuf-3.14.0-py2.py3-none-any.whl (173 kB)
     ---------------------------------------- 173.5/173.5 kB 10.2 MB/s eta 0:00:00
Collecting sortedcontainers<2.4,>=2.3
  Downloading sortedcontainers-2.3.0-py2.py3-none-any.whl (29 kB)
Collecting zstandard<0.15,>=0.11
  Downloading zstandard-0.14.1.tar.gz (676 kB)
     ---------------------------------------- 676.8/676.8 kB 14.2 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [25 lines of output]
      Traceback (most recent call last):
        File "C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
          main()
        File "C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
        File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\build_meta.py", line 338, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
        File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\build_meta.py", line 320, in _get_build_requires
          self.run_setup()
        File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\build_meta.py", line 484, in run_setup
          super(_BuildMetaLegacyBackend,
        File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\build_meta.py", line 335, in run_setup
          exec(code, locals())
        File "<string>", line 63, in <module>
        File "C:\Users\DAS\AppData\Local\Temp\pip-install-suw_x1ii\zstandard\setup_zstd.py", line 164, in get_c_extension
          compiler.initialize()
        File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 253, in initialize
          vc_env = _get_vc_env(plat_spec)
        File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\msvc.py", line 210, in msvc14_get_vc_env
          return _msvc14_get_vc_env(plat_spec)
        File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\msvc.py", line 164, in _msvc14_get_vc_env
          raise distutils.errors.DistutilsPlatformError(
      distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
    Traceback (most recent call last):
      File "C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
        main()
      File "C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
        json_out['return_val'] = hook(**hook_input['kwargs'])
      File "C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
        return hook(config_settings)
      File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\build_meta.py", line 338, in get_requires_for_build_wheel
        return self._get_build_requires(config_settings, requirements=['wheel'])
      File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\build_meta.py", line 320, in _get_build_requires
        self.run_setup()
      File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\build_meta.py", line 484, in run_setup
        super(_BuildMetaLegacyBackend,
      File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\build_meta.py", line 335, in run_setup
        exec(code, locals())
      File "<string>", line 63, in <module>
      File "C:\Users\DAS\AppData\Local\Temp\pip-install-suw_x1ii\zstandard\setup_zstd.py", line 164, in get_c_extension
        compiler.initialize()
      File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 253, in initialize
        vc_env = _get_vc_env(plat_spec)
      File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\msvc.py", line 210, in msvc14_get_vc_env
        return _msvc14_get_vc_env(plat_spec)
      File "C:\Users\DAS\AppData\Local\Temp\pip-build-env-dh4tf_vd\overlay\Lib\site-packages\setuptools\msvc.py", line 164, in _msvc14_get_vc_env
        raise distutils.errors.DistutilsPlatformError(
    distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
    [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
[INSTALL] Clean Up
=======================MobSF Clean Script for Windows=======================
Running this script will delete the Scan database, all files uploaded and generated.
C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF\scripts
Deleting all uploads
Deleting all downloads
Deleting Static Analyzer migrations
Deleting Dynamic Analyzer migrations
Deleting MobSF migrations
Deleting temp and log files
Deleting Scan database
Deleting Secret file
Deleting Previous setup files
Deleting MobSF data directory: "C:\Users\DAS\.MobSF"
Done
[INSTALL] Migrating Database
Traceback (most recent call last):
  File "C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF\manage.py", line 12, in <module>
    from django.core.management import execute_from_command_line
ModuleNotFoundError: No module named 'django'
Traceback (most recent call last):
  File "C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF\manage.py", line 12, in <module>
    from django.core.management import execute_from_command_line
ModuleNotFoundError: No module named 'django'
Traceback (most recent call last):
  File "C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF\manage.py", line 12, in <module>
    from django.core.management import execute_from_command_line
ModuleNotFoundError: No module named 'django'
Download and Install wkhtmltopdf for PDF Report Generation - https://wkhtmltopdf.org/downloads.html
[INSTALL] Installation Complete
[ERROR] Installation Failed!
Please ensure that all the requirements mentioned in documentation are installed before you run setup script.
Scroll up to see any installation errors.

The 'decorator==4.4.2' distribution was not found and is required by the application

C:\Users\DAS\Desktop\K\Pentesting Android\Mobile-Security-Framework-MobSF>
