2023-10-31

How to make a grammar exclude tokens using Python textX

I am trying to parse a network device configuration file, and although I will be going through the whole file, I do not want to include all of its terms, only a subset.

So assume that the configuration file is as follows:

bootfile abc.bin
motd "this is a
message for everyone to follow"
.
.
.
group 1
group 2
.
.
.
permit tcp ab.b.c.d b.b.b.y eq 222
permit tcp ab.b.c.d b.b.b.y eq 222
permit tcp ab.b.c.d b.b.b.y eq 222
.
.
.
interface a
  description this is interface a
  vlan 33 

interface b
  description craigs list
  no shut
  vlan 33
  no ip address
.
.
.

I am only trying to capture the interface line (as is) and the description and vlan lines as is; everything else would be ignored. Contents within the interface would be broken into two attributes: valid and invalid.

so the grammar would look something like this:

Config[noskipsp]:
  interfaces *= !InterfaceDefinition | InterfaceDefinition
;

InterfaceDefinition:
  interface = intf
  valids *= valid
  invalids *= invalid
;

intf: /^interface .*\n/;
cmds: /^ (description|vlan) .*\n/;
invalid: /^(?!(interface|description|vlan)).*\n/;

The goal is to obtain a Python array of interfaces where each interface has two attributes, valids and invalids, each an array. The valids array would contain the description or vlan entries, and the invalids array would contain everything else.

There are several challenges that I can't seem to address: 1) How to ignore all the other content that is not an interface definition? 2) How to ensure that all interfaces end up as interfaces and not in the invalids attribute of another interface?

Unfortunately, the grammar does not fail when parsing the text, but my understanding of how the parser goes through the text appears to be at fault, since it complains the moment it tries to read any text past the 'interface .*' section.

Additionally, I am currently testing explicitly with a file containing only interface definitions, but the goal is to process full files, targeting only the interfaces, so the grammar needs to be able to discard all other content.


Updated progress

Originally, after Igor's first answer, I was able to create a grammar that would fully parse my dummy configuration file, though the results were not the ones desired, probably due to my ignorance. With Igor's second, updated answer, I have decided to refactor the original grammar and simplify it to try to match my sample dummy configuration.

My goal at the model level is to have an object resembling the following pseudo-structure:

class network_config:

    def __init__(self):
        self.invalid = [] # Entries that do not match the higher level
                       # hierarchy objects
        self.interfaces = []  # Interface definitions

class Interface:

     def __init__(self):
        self.name = ""
        self.vlans = []
        self.description = ""
        self.junk = []  # These are all other configurations
                        # within the interface that match
                        # neither vlans nor description

The dummy configuration file (data to be parsed) looks as follows:

junk
irrelevant configuration line
interface GigabitEthernet0/0/0
   description 3 and again
   nonsense
   vlan 33
   this and that
   vlan 16
interface GigabitEthernet0/0/2
   something for the nic
   vlan 22
   description here and there
! a simple comment
intermiediate
more nonrelated information

interface GigabitEthernet0/0/3
   this is junk too
   vlan 99
don't forget this
interface GigabitEthernet0/0/1
interface GigabitEthernet0/0/9
nothing of interest
silly stuff
some final data

And the new textx grammar that I have created is as follows:

Config:
    (
        invalid*=Junk
        | interfaces*=Interface
    )*
;

Junk:
   /(?s)(?!((interface)|(vlan)|(description)).)[^\n]+\n/  // <- consume anything that is not a 'vlan', 'description', or 'interface'
;

Interface:
   'interface' name=/[^\n]+\n/
   ( description+=Description
   | vlans*=Vlan
   | invalids*=InterfaceJunk
   )*
;

Description:
    /description[^\n]+\n/
;

Vlan:
    /vlan[^\n]+\n/
;

InterfaceJunk:
    /(?!((interface)|(vlan)|(description))).[^\n]\n/  // <- consume everything that is not an interface, vlan, or description
;

To my surprise, when I ran it, I noticed that it was going into an infinite loop. I also noticed that changing the root rule from

Config:
    (
        invalid*=Junk
        | interfaces*=Interface
    )*
;

*** PARSING MODEL ***
>> Matching rule Model=Sequence at position 0 => *junk irrel
   >> Matching rule Config=ZeroOrMore in Model at position 0 => *junk irrel
      >> Matching rule OrderedChoice in Config at position 0 => *junk irrel
         >> Matching rule __asgn_zeroormore=ZeroOrMore[invalid] in Config at position 0 => *junk irrel
            ?? Try match rule Junk=RegExMatch((?s)(?!((interface)|(vlan)|(description)).)[^\n]+\n) in __asgn_zeroormore at position 0 => *junk irrel
            ++ Match 'junk
' at 0 => '*junk *irrel'
            ?? Try match rule Junk=RegExMatch((?s)(?!((interface)|(vlan)|(description)).)[^\n]+\n) in __asgn_zeroormore at position 5 => junk *irrelevant
            ++ Match 'irrelevant configuration line
' at 5 => 'junk *irrelevant configuration line *'
            ?? Try match rule Junk=RegExMatch((?s)(?!((interface)|(vlan)|(description)).)[^\n]+\n) in __asgn_zeroormore at position 35 => tion line *interface
            -- NoMatch at 35
         <<+ Matched rule __asgn_zeroormore=ZeroOrMore[invalid] in __asgn_zeroormore at position 35 => tion line *interface
      <<+ Matched rule OrderedChoice in Config at position 35 => tion line *interface
      >> Matching rule OrderedChoice in Config at position 35 => tion line *interface
         >> Matching rule __asgn_zeroormore=ZeroOrMore[invalid] in Config at position 35 => tion line *interface
            ?? Try match rule Junk=RegExMatch((?s)(?!((interface)|(vlan)|(description)).)[^\n]+\n) in __asgn_zeroormore at position 35 => tion line *interface
            -- NoMatch at 35
         <<- Not matched rule __asgn_zeroormore=ZeroOrMore[invalid] in __asgn_zeroormore at position 35 => tion line *interface
      <<- Not matched rule OrderedChoice in Config at position 35 => tion line *interface
      >> Matching rule OrderedChoice in Config at position 35 => tion line *interface
         >> Matching rule __asgn_zeroormore=ZeroOrMore[invalid] in Config at position 35 => tion line *interface
            ?? Try match rule Junk=RegExMatch((?s)(?!((interface)|(vlan)|(description)).)[^\n]+\n) in __asgn_zeroormore at position 35 => tion line *interface
            -- NoMatch at 35
         <<- Not matched rule __asgn_zeroormore=ZeroOrMore[invalid] in __asgn_zeroormore at position 35 => tion line *interface

to

Config:
    (
        invalid*=Junk interfaces*=Interface
    )*
;

*** PARSING MODEL ***
>> Matching rule Model=Sequence at position 0 => *junk irrel
   >> Matching rule Config=ZeroOrMore in Model at position 0 => *junk irrel
      >> Matching rule Sequence in Config at position 0 => *junk irrel
         >> Matching rule __asgn_zeroormore=ZeroOrMore[invalid] in Config at position 0 => *junk irrel
            ?? Try match rule Junk=RegExMatch((?s)(?!((interface)|(vlan)|(description)).)[^\n]+\n) in __asgn_zeroormore at position 0 => *junk irrel
            ++ Match 'junk
' at 0 => '*junk *irrel'
            ?? Try match rule Junk=RegExMatch((?s)(?!((interface)|(vlan)|(description)).)[^\n]+\n) in __asgn_zeroormore at position 5 => junk *irrelevant
            ++ Match 'irrelevant configuration line
' at 5 => 'junk *irrelevant configuration line *'
            ?? Try match rule Junk=RegExMatch((?s)(?!((interface)|(vlan)|(description)).)[^\n]+\n) in __asgn_zeroormore at position 35 => tion line *interface
            -- NoMatch at 35
         <<+ Matched rule __asgn_zeroormore=ZeroOrMore[invalid] in __asgn_zeroormore at position 35 => tion line *interface
         >> Matching rule __asgn_zeroormore=ZeroOrMore[interfaces] in Config at position 35 => tion line *interface
            >> Matching rule Interface=Sequence in __asgn_zeroormore at position 35 => tion line *interface
               ?? Try match rule StrMatch(interface) in Interface at position 35 => tion line *interface
               ++ Match 'interface' at 35 => 'tion line *interface* '
               >> Matching rule __asgn_plain=Sequence[name] in Interface at position 44 =>  interface* GigabitEt
                  ?? Try match rule RegExMatch([^\n]+\n) in __asgn_plain at position 45 => interface *GigabitEth
                  ++ Match 'GigabitEthernet0/0/0
' at 45 => 'interface *GigabitEthernet0/0/0 *'
               <<+ Matched rule __asgn_plain=Sequence[name] in __asgn_plain at position 66 => rnet0/0/0 *   descrip
               >> Matching rule ZeroOrMore in Interface at position 66 => rnet0/0/0 *   descrip
                  >> Matching rule OrderedChoice in Interface at position 66 => rnet0/0/0 *   descrip
                     >> Matching rule __asgn_oneormore=OneOrMore[description] in Interface at position 66 => rnet0/0/0 *   descrip
                        ?? Try match rule Description=RegExMatch(description[^\n]+\n) in __asgn_oneormore at position 69 => t0/0/0    *descriptio
                        ++ Match 'description 3 and again
' at 69 => 't0/0/0    *description 3 and again *'
                        ?? Try match rule Description=RegExMatch(description[^\n]+\n) in __asgn_oneormore at position 96 =>  again    *nonsense
                        -- NoMatch at 96

Gave two different results, though neither was the one I was hoping for: in the first form, the parser would end up stuck in a loop, continuously looking for the invalid patterns (i.e., Junk), while in the second form, the parser would get past seeking invalids and at least find the first interface, GigabitEthernet0/0/0, though once inside the interface it would, once more, get into an infinite loop.

I was under the impression that ( attr1*=pattern1 | attr2*=pattern2 | attr3*=pattern3 ) meant that it would try each of the patterns, but it seems to stay stuck on pattern1 for as long as pattern1 is not found (ordered choice describes it as such). I must have something in the grammar that is causing this.

Any hints as to where my misconceptions are?
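As a side note to the debugging above, the negative-lookahead idea behind the Junk rules can be checked in isolation with plain Python re. This is a sketch under assumptions: it uses the stdlib re module (not textX/Arpeggio itself), and the \b word boundary is added here for clarity and is not in the grammar above.

```python
import re

# A Junk-style line matcher: any full line that does NOT start with
# one of the keywords 'interface', 'vlan', or 'description'.
junk_re = re.compile(r'(?!(?:interface|vlan|description)\b)[^\n]+\n')

lines = [
    "junk\n",
    "irrelevant configuration line\n",
    "interface GigabitEthernet0/0/0\n",
    "vlan 33\n",
]
classified = [bool(junk_re.match(line)) for line in lines]
print(classified)  # [True, True, False, False]
```

The first two lines match as junk, while the keyword lines are rejected by the lookahead, which is the behaviour the Junk and InterfaceJunk rules aim for.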



iOS Safari does not render a specific font correctly

I need to use the font Machauer in my project. On desktop (Chrome/Safari) it renders correctly, but not on iOS (iPhone/iPad).

Strangely, the same problem happens on the download page of the font, except in the textarea: there the font is displayed correctly, with all its details. I wonder what is different there.

The website with textarea

textarea displayed correctly:

textarea correctly displayed

this is how it renders outside the textarea:

how it renders outside the textarea

I tried different options like text-rendering or font-smoothing; they had no impact on iOS Safari.

This is how I specified the font and used it:

@font-face {
  font-family: 'Machauer';
  src: url(fonts/machauer.ttf) format('truetype');
  font-weight: normal;
}

h1 {
  font-family: 'Machauer';
  text-rendering: geometricPrecision;
}


2023-10-30

string.Split for Span?

I was wondering how I might implement, or whether there are any workarounds for, a string.Split() method for ReadOnlySpan<T> or Span<T> in C#, because unfortunately ReadOnlySpan<T>.Split() does not seem to exist.

I am not quite sure how to achieve the behaviour I wish for. It could probably be implemented by combining ReadOnlySpan<T>.IndexOf() and ReadOnlySpan<T>.Slice(), but because even the support for ReadOnlySpan<T>.IndexOf() isn't great (it isn't possible to specify a startIndex or a count), I would prefer to avoid this entirely.

I am also aware that the problem with a ReadOnlySpan<T>.Split() method would be that it isn't possible to return ReadOnlySpan<T>[] or ReadOnlySpan<ReadOnlySpan<T>>, because ReadOnlySpan<T> is a ref struct and therefore must be stack-allocated, and putting it into any collection would require a heap allocation.

So, does anyone have any idea how I might achieve that?

Edit: It should work without knowing the number of parts beforehand.
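For what it's worth, the IndexOf-plus-Slice idea mentioned above can be sketched language-neutrally. In this hedged Python sketch (the name split_spans and the (start, length) convention are made up for illustration), the generator yields offsets instead of materialising the parts, which is how an allocation-free span splitter sidesteps both the collection problem and the unknown part count:

```python
# Sketch of allocation-free splitting: walk the input with an
# index-based find (IndexOf with a start index) and yield
# (start, length) pairs (the arguments a Slice would take),
# instead of building a collection of parts.
def split_spans(data: bytes, sep: int):
    start = 0
    while True:
        idx = data.find(sep, start)
        if idx < 0:
            yield (start, len(data) - start)  # final part
            return
        yield (start, idx - start)
        start = idx + 1

parts = [b"a,bb,,c"[s:s + n] for s, n in split_spans(b"a,bb,,c", ord(","))]
print(parts)  # [b'a', b'bb', b'', b'c']
```

A C# equivalent would be a ref-struct enumerator over ReadOnlySpan<T> exposing the current slice, so nothing ever leaves the stack.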



I cannot execute "make clean" in my terminal

Apologies for the inconvenience; it is my first time on Stack Overflow.

I am trying to compile a make project, but when I run make clean, the terminal returns this error:

rm -rf build
process_begin: CreateProcess(NULL, rm -rf build, ...) failed.
make (e=2): The system cannot find the file specified.
make: *** [nRF5_SDK_17.0.2_d674dde/components/toolchain/gcc/Makefile.common:79: clean] Error 2

In Makefile.common, at line 70, I have this:

ifneq (,$(filter clean, $(MAKECMDGOALS)))

OTHER_GOALS := $(filter-out clean, $(MAKECMDGOALS))
ifneq (, $(OTHER_GOALS))
$(info Cannot make anything in parallel with "clean".)
$(info Execute "$(MAKE) clean \
  $(foreach goal, $(OTHER_GOALS),&& $(MAKE) $(goal))" instead.)
$(error Cannot continue)
else
.PHONY: clean
clean:
    $(RM) $(OUTPUT_DIRECTORY)  # <- this is line 79
endif # ifneq(, $(OTHER_GOALS))

else # ifneq (,$(filter clean, $(MAKECMDGOALS)))

What can I do to solve this problem?

Thank you for your effort!

I tried to change the output directory.



2023-10-29

Entity Framework Core: how to combine selects and retain order?

What I currently want to achieve is this, but as an IQueryable, without having to materialise the result in memory, as the request itself already contains pagination information. I really cannot get my head around how to solve it; I already tried Union, but that does not retain any order at all.

Basically I want to order the items based on the State and Updated properties. It is all one table, and base.OnQuery returns the DbSet<HKTDownload>. Is it somehow possible to solve this with OrderBy alone? If so, how?

    var list = new List<HKTDownload>();
    list.AddRange(
        base.OnQuery(context, request)
            .Where(x => 
                x.State == HKTDownloadState.DownloadContent
             || x.State == HKTDownloadState.Unpack
             || x.State == HKTDownloadState.Repair
             || x.State == HKTDownloadState.MoveFiles)
            .OrderByDescending(x => x.Updated)
            .Include(x => x.DownloadConfig)
    );
    list.AddRange(
        base.OnQuery(context, request)
            .Where(x => x.State == HKTDownloadState.DownloadNZBFile)
            .OrderByDescending(x => x.Updated)
            .Include(x => x.DownloadConfig)
    );
    list.AddRange(
        base.OnQuery(context, request)
            .Where(x => x.State == HKTDownloadState.QueuedForDownloadContent)
            .OrderByDescending(x => x.Updated)
            .Include(x => x.DownloadConfig)
    );
    list.AddRange(
        base.OnQuery(context, request)
            .Where(x => x.State == HKTDownloadState.Queued)
            .OrderByDescending(x => x.Updated)
            .Include(x => x.DownloadConfig)
    );
    list.AddRange(
        base.OnQuery(context, request)
            .Where(x => 
                x.State == HKTDownloadState.Completed 
                || x.State == HKTDownloadState.Error)
            .OrderByDescending(x => x.Updated)
            .Take(200)
            .Include(x => x.DownloadConfig)
    );
    
    return list.AsQueryable();
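The five sequential queries above boil down to one ordering: a state-priority key first, then Updated descending. A minimal Python sketch of that key (the priority numbers and the sample data are assumptions, not the real HKTDownloadState values, and the sketch ignores the Take(200) on the last group):

```python
# Assumed state priorities mirroring the order of the five AddRange calls.
PRIORITY = {
    "DownloadContent": 0, "Unpack": 0, "Repair": 0, "MoveFiles": 0,
    "DownloadNZBFile": 1,
    "QueuedForDownloadContent": 2,
    "Queued": 3,
    "Completed": 4, "Error": 4,
}

downloads = [
    {"state": "Queued", "updated": 5},
    {"state": "Unpack", "updated": 1},
    {"state": "Error", "updated": 9},
    {"state": "DownloadContent", "updated": 3},
]
# Sort by priority ascending, then Updated descending.
downloads.sort(key=lambda d: (PRIORITY[d["state"]], -d["updated"]))
print([d["state"] for d in downloads])
# ['DownloadContent', 'Unpack', 'Queued', 'Error']
```

In EF Core the same key can usually be expressed as a conditional (CASE-style) expression inside OrderBy, followed by ThenByDescending(x => x.Updated), keeping everything as a single IQueryable.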


Position of Hide Sidebar button in SwiftUI

I was looking at the Apple Design Resources and saw a toolbar where the hide sidebar button is inside the sidebar. However, using SwiftUI, I cannot find how to place it there at all times instead of next to the traffic-light buttons. I can add an additional sidebar button before the title, but I cannot remove the initial one by the traffic-light buttons.

Is there any way to alter the position of this button? Thanks.



Error: Couldn't find a navigation object. Is your component inside NavigationContainer? (how to use useNavigation inside Navigation container)

I am writing a React Native MERN chat app, and I am adding a profile page so the user can edit or add their data. I want to access this page from a button, using useNavigation, but I get the error mentioned in the title. NOTE: the button is inside the NavigationContainer. My code:

import React from 'react';
import storage from './storage';
import { useNavigation } from '@react-navigation/native'
import { NavigationContainer, Screen } from '@react-navigation/native';
import { createNativeStackNavigator } from '@react-navigation/native-stack';
import { LogBox, ActivityIndicator, Button, TouchableOpacity, View } from 'react-native';
import { Menu, MenuItem, MenuDivider } from 'react-native-material-menu';
import Icon from 'react-native-vector-icons/Entypo';
import ProfileScreen from './Profile.Screen';
//rest of the imports...

export default function App() {
  const [savedData, setSavedData] = React.useState(null)
  const [fetching, setFetching] = React.useState(false)
  const [receiver, setReceiver] = React.useState(null)
  const [visible, setVisible] = React.useState(false);

  const navigation = useNavigation();

  //rest of the code...

  return (
    <ThemeProvider>
      <NavigationContainer>
        <Stack.Navigator>
          {isSignedIn && <>
            <Stack.Screen name="Home" component={ChatList} initialParams={params} options={() =>    ({
              title: "myApp", headerRight:
                () => (
                  <View style=>
                    <Menu
                      visible={visible}
                      anchor={
                        <Icon
                          name="dots-three-vertical"
                          size={20}
                          color="black"
                          onPress={() => setVisible(true)}
                        />
                      }
                      onRequestClose={() => setVisible(false)}
                    >
                      <MenuItem onPress={navigation.navigate('Profile')}>My Profile</MenuItem> {/* this is the line*/}
                      <MenuItem onPress={changeTheme}>Change Theme</MenuItem>
                      <MenuItem onPress={() => handleLogout(user)}>Log Out</MenuItem>
                    </Menu>
                  </View>
                )
            })} />
            <Stack.Screen name="Chat" component={Chat} initialParams={params} options={() => ({ title: receiver })} />
            <Stack.Screen name="Users" component={UsersScreen} initialParams={params} />
            <Stack.Screen name="Profile" component={ProfileScreen} initialParams={params} options= />
          </>}

          {!isSignedIn && <>
            <Stack.Screen name="SignIn" component={SignInScreen} initialParams= options={() => ({ title: "myApp" })} />
            <Stack.Screen name="ForgotPassword" component={ForgetPassword} />
          </>}


        </Stack.Navigator>
      </NavigationContainer>
    </ThemeProvider>
  )

}

I tried removing the quotation marks (I saw that as a solution somewhere): instead of navigation.navigate('Profile') I used navigation.navigate(Profile). I also tried removing the onPress function and still got the error, based on const navigation = useNavigation(), and I tried importing useNavigation beside NavigationContainer, but nothing worked.

I know this would have worked if there were no menu; how can I do this with the menu? (NOTE: I know that it should be wrapped by the NavigationContainer, but I don't know how.)



JavaFX TableView text in the cells of the columns seems to jump

When I click the button for showing the pane with the table for the first time (after load), the text in the cells of the columns seems to jump.

Maybe it has something to do with the scrollbar that is being added to the table. If I limit the number of rows to what fits in the table without the scrollbar, the text does not move when it is shown for the first time. Once the table has been shown once, it no longer moves.

Below are an example and a description of how to reproduce the issue. This is App.java:

package org.example;

import javafx.application.Application;
import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.TableColumn;
import javafx.scene.control.TableView;
import javafx.scene.control.cell.PropertyValueFactory;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.VBox;
import javafx.scene.text.Text;
import javafx.stage.Stage;

public class App extends Application {

    TableView<TestTable> tableViev;
    ObservableList<TestTable> observableListWithTests;

    @Override
    public void start(Stage stage) {

        this.tableViev = new TableView<>();
        this.observableListWithTests = FXCollections.observableArrayList();

        this.tableViev = new TableView<>();
        this.tableViev.setColumnResizePolicy(TableView.CONSTRAINED_RESIZE_POLICY_ALL_COLUMNS);

        TableColumn<TestTable, Integer> test1 = new TableColumn<>("Test 1");
        TableColumn<TestTable, String> test2 = new TableColumn<>("Test 2");
        TableColumn<TestTable, String> test3 = new TableColumn<>("Test 3");
        TableColumn<TestTable, String> test4 = new TableColumn<>("Test 4");

        test1.setStyle("-fx-text-alignment: center; -fx-alignment: center;");
        test2.setStyle("-fx-text-alignment: center; -fx-alignment: center;");
        test3.setStyle("-fx-text-alignment: center; -fx-alignment: center;");
        test4.setStyle("-fx-text-alignment: center; -fx-alignment: center;");

        test1.setCellValueFactory(new PropertyValueFactory<>("test1"));
        test2.setCellValueFactory(new PropertyValueFactory<>("test2"));
        test3.setCellValueFactory(new PropertyValueFactory<>("test3"));
        test4.setCellValueFactory(new PropertyValueFactory<>("test4"));

        for (int x = 0; x < 50; x++) {

            observableListWithTests.add(new TestTable("Test " + x, "Test " + x , "Test " + x, "Test " + x));

        }

        tableViev.getColumns().addAll(test1, test2, test3, test4);
        tableViev.setItems(observableListWithTests);

        VBox centerPane1 = new VBox();
        centerPane1.setPadding(new Insets(20, 20, 20, 10));
        Text text1 = new Text("CenterPane 1");
        centerPane1.getChildren().add(text1);

        VBox centerPane2 = new VBox();
        centerPane2.setPadding(new Insets(20, 20, 20, 10));
        centerPane2.getChildren().add(tableViev);

        Button buttonPane1 = new Button("Pane 1");
        Button buttonPane2 = new Button("Pane 2");

        BorderPane borderPane = new BorderPane();

        borderPane.setCenter(centerPane1);

        buttonPane1.setOnAction(event -> borderPane.setCenter(centerPane1));
        buttonPane2.setOnAction(event -> borderPane.setCenter(centerPane2));

        VBox vBoxWithButtons = new VBox();
        vBoxWithButtons.setPadding(new Insets(20));
        vBoxWithButtons.setSpacing(10);
        vBoxWithButtons.getChildren().addAll(buttonPane1, buttonPane2);

        borderPane.setLeft(vBoxWithButtons);
        var scene = new Scene(borderPane, 640, 480);

        stage.setScene(scene);
        stage.show();
    }

    public static void main(String[] args) {
        launch();
    }

}

And this is the TestTable.java:

package org.example;

public class TestTable {

    private String test1;
    private String test2;
    private String test3;
    private String test4;

    public TestTable(String test1, String test2, String test3, String test4) {

        this.test1 = test1;
        this.test2 = test2;
        this.test3 = test3;
        this.test4 = test4;

    }

    public String getTest1() {

        return this.test1;
    }

    public void setTest1(String test) {

        this.test1 = test;

    }

    public String getTest2() {

        return this.test2;
    }

    public void setTest2(String test) {

        this.test2 = test;

    }

    public String getTest4() {

        return this.test4;
    }

    public void setTest4(String test) {

        this.test4 = test;

    }

    public void setTest3(String test) {

        this.test3 = test;

    }

    public String getTest3() {

        return this.test3;

    }

}

Open the app; "Pane 1" is showing. Click "Pane 2": pane 2 with the table is shown, but the text is moving; it moves fast, but enough to notice. Now, in the code, limit the rows to 10 and redo the above: the text is not moving.

I have searched for options on how to always show the scrollbar, but I don't think that is possible. Maybe there is a solution for loading the pane without showing it?

Using OpenJDK 21. JavaFX 21.

Update: the screenshots below show the table before and after the movement:

(screenshots omitted)

Update 2: More screenshots:

(screenshots omitted)



2023-10-28

Can I configure the CodeQL analysis for PHP?

I have this error. I read that CodeQL can be configured to scan PHP code, but it fails. What am I doing wrong?

Languages from configuration: php
Error: Did not recognize the following languages: php

name: "CodeQL"

on:
  push:
    branches: [ "main" ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ "main" ]
  schedule:
    - cron: '21 0 * * 4'

jobs:
  analyze:
    name: Analyze

    runs-on: $
    timeout-minutes: $
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: [ 'javascript-typescript' ]

    steps:
    - name: Checkout repository
      uses: actions/checkout@v3

    # Initializes the CodeQL tools for scanning.
    - name: Initialize CodeQL
      uses: github/codeql-action/init@v2
      with:
        languages: php

    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v2
      with:
        category: "/language:$"

2023-10-27

Cannot connect to Neo4j in Python to push data

I am going slightly crazy trying to understand why I can't push data to my Neo4j database using Cypher in Python.

I am running basic testing code just to see if I can push data. Here is my testing code:

import logging
from neo4j import GraphDatabase

# Set up logging
logging.basicConfig(level=logging.DEBUG)  # Change the level to ERROR, WARNING, INFO, or DEBUG as needed
log = logging.getLogger("neo4j")

class Neo4jService:
    def __init__(self, uri, user, password):
        self._driver = GraphDatabase.driver(uri, auth=(user, password))
        
    def close(self):
        log.debug("Closing driver.")
        self._driver.close()
        
    def run_queries(self, queries):
        log.debug("Running queries.")
        with self._driver.session() as session:
            for query in queries:
                log.debug(f"Executing query: {query}")
                session.run(query)

try:
    # Initialize the Neo4j service
    URI = "neo4j+s://838f9df7.databases.neo4j.io"
    AUTH = ("neo4j", "my_password")

    log.info("Initializing driver.")
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        log.info("Verifying connectivity.")
        driver.verify_connectivity()

    summary = driver.execute_query(
        "MERGE (:Person {name: $name})",
        name="Alice",
        database_="neo4j",
    ).summary

    log.info(f"Created {summary.counters.nodes_created} nodes in {summary.result_available_after} ms.")

except Exception as e:
    log.error("An error occurred:", exc_info=True)

Whatever I seem to do, I get an error like this:

neo4j.exceptions.ServiceUnavailable: Unable to retrieve routing information

I have tried manually telling Neo4j to trust the certificate by saving the certificate as a file and including a trust setting in my code, but this didn't work. I have tried switching from neo4j+s to bolt or just neo4j. I have updated neo4j and Python. I feel like I have been debugging for about two days. Whatever I do, nothing happens in my Neo4j database. Help appreciated!
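One detail worth double-checking in the snippet above, separate from the routing error: execute_query is called after the `with GraphDatabase.driver(URI, auth=AUTH)` block has exited, at which point the driver has been closed. A generic (deliberately non-Neo4j) Python illustration of that context-manager pitfall:

```python
# A toy stand-in for any resource closed by its `with` block.
class Driver:
    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.closed = True  # the `with` block closes the resource

    def execute_query(self, query):
        if self.closed:
            raise RuntimeError("driver already closed")
        return "ok"

with Driver() as driver:
    pass  # e.g. a connectivity check happens here

try:
    driver.execute_query("MERGE ...")  # outside the block: already closed
except RuntimeError as e:
    print(e)  # driver already closed
```

Moving the query call inside the `with` block (or not using the driver as a context manager) avoids this class of problem, whatever the cause of the routing error turns out to be.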



Didn't get any dynamic content on web page using view/template engine (Handlebars)

I am using the Handlebars template engine. I get no errors in the console, but I am not able to display dynamic content in the index.hbs file, which is rendered from app.js.

app.js

const express = require('express');
const app = express();

app.set('view engine','hbs');   

app.use(express.urlencoded({extended:true}));   //to get the html form data

app.use(express.static('./public'));

app.use(express.json());

app.get('/',(req,res)=>{
    
    res.render('index.hbs'); 

});

app.post('/', async (req,res)=>{

   const currentCity = req.body.city; 
   
   let WRAP_DATA = await getCityWeather(currentCity);  //getCityWeather function return weather by city
   
   console.log('Weather Info = ',WRAP_DATA); // successfully got the WRAP_DATA 
   
   res.render('index.hbs',{WRAP_DATA});
});

At the console I got WRAP_DATA successfully:

description: 'overcast clouds',
humidity: 53,
visibility: 10,
windSpeed: 2.592,
winddeg: 32

index.hbs

It is located inside the views folder and displays the HTML and CSS properly; the only problem is replacing the dynamic data. I am mentioning only the code where I used hbs tags.

<table>
    <tr class="detail">
        <td> <img src="../img/humidity.png" class="desc_icon"> </td>
        <td class="label">Humidity</td>
        <td class="value">{{WRAP_DATA.humidity}} %</td>
    </tr>
    <tr class="detail">
        <td> <img src="../img/visibility.png" class="desc_icon"> </td>
        <td class="label">Visibility</td>
        <td class="value">{{WRAP_DATA.visibility}} km</td>
    </tr>
    <tr class="detail">
        <td> <img src="../img/windspeed.png" class="desc_icon"> </td>
        <td class="label">Wind Speed</td>
        <td class="value">{{WRAP_DATA.windSpeed}} km/h</td>
    </tr>
    <tr class="detail">
        <td> <img src="../img/direction.png" class="desc_icon"> </td>
        <td class="label">Wind Direction</td>
        <td class="value">{{WRAP_DATA.winddeg}} °</td>
    </tr>
</table>

When I run my project, the backend works fine, but the WRAP_DATA values mentioned above are not displayed in their positions in index.hbs.



2023-10-26

How to handle deselection event on PHPickerViewController?

I'm trying to register a deselection event on PHPickerViewController, but the picker() function is called only on the selection event, not on deselection. Not even updateUIViewController() is called when the user makes the deselection.

Here is my code:

//--------------------------------------------------
// COORDINATOR CLASS
//--------------------------------------------------
class Coordinator: NSObject, PHPickerViewControllerDelegate
{
    //--------------------------------------------------
    // VARIABLES
    //--------------------------------------------------
    var parent: ImageChatPicker


    //--------------------------------------------------
    // METHODS
    //--------------------------------------------------
    // INIT
    init(_ parent: ImageChatPicker) {
        self.parent = parent
    }
    //--------------------------------------------------
    // PICKER
    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        if results.isEmpty {
            parent.image = nil
            parent.showingImagePicker = false
        }
        // IF NOTHING HAS BEEN LOADED
        guard let provider = results.first?.itemProvider else { return }
        // IF IMAGE HAS BEEN LOADED
        if (provider.canLoadObject(ofClass: UIImage.self)) {
            provider.loadObject(ofClass: UIImage.self) { image, _ in
                DispatchQueue.main.async {
                    self.parent.image = image as? UIImage
                    if self.parent.image == nil {
                        let alertDialog = UIAlertController(title: "", message: "Unable to load image.\nTry different file.", preferredStyle: .alert)
                        alertDialog.addAction(UIAlertAction(title: "Ok", style: .default))
                        // Find the root view controller and present the alert
                        if let viewController = UIApplication.shared.windows.first?.rootViewController {
                            viewController.present(alertDialog, animated: true, completion: nil)
                        }
                    }
                }
            }
        }
    }
}

Does anybody know how to handle the deselection event?



2023-10-25

UiPath Open Browser Activity

What packages must be installed to use the Open Browser activity in UiPath? I cannot seem to find it, even though I installed the WebApi package; I only see the "Use application/browser" activity.

After installing the WebApi package, I expected to find the Open Browser activity.



2023-10-24

GCP Colab Enterprise shared VPC connection

We are trying to use the Colab Enterprise offering (in Vertex AI) using a shared VPC (hosted in a different project). There is an organizational policy to block external IPs. I have added the Compute Network user permission to the service agent in the Shared VPC Host project, and the runtime template and the runtime are created successfully. But when I try to connect a notebook to the runtime, it tries connecting until a timeout, after which it fails. I checked the runtime logs, this is what I see:

cos.googleapis.com/container_name: "proxy-agent"
message: failed to list pending requests: 401
Your client does not have permission to the requested URL /tun/m/4592f09221234568f8016274df1b36a14/agent/pending

What could be the issue? I guess it is something networking- or IAM-related. If I create a runtime in a normal VPC (inside the same project), the notebook can connect and it works fine.



2023-10-23

Screen capture (mediaProjection) on Android 14

I'm trying to adapt my app to run on Android 14. It worked fine on Android 13. But in SDK version 34 I get an exception when I try to start the foreground service.

Caused by: java.lang.SecurityException: 
Starting FGS with type mediaProjection callerApp=ProcessRecord
targetSDK=34 requires permissions: all of the permissions allOf=true
[android.permission.FOREGROUND_SERVICE_MEDIA_PROJECTION] 
any of the permissions allOf=false 
[android.permission.CAPTURE_VIDEO_OUTPUT, android:project_media]
at android.os.Parcel.createExceptionOrNull(Parcel.java:3057)
at android.os.Parcel.createException(Parcel.java:3041)
at android.os.Parcel.readException(Parcel.java:3024)
at android.os.Parcel.readException(Parcel.java:2966)
at android.app.IActivityManager$Stub$Proxy.setServiceForeground(IActivityManager.java:6761)
at android.app.Service.startForeground(Service.java:862)
at .service.ScreenCaptureService.startWithNotification(ScreenCaptureService.java:147)

This message makes it look like I need permission to access the camera, but I don't use the camera at all in my app.

Doc:

The app must set the foregroundServiceType attribute to FOREGROUND_SERVICE_TYPE_MEDIA_PROJECTION in the element of the app's manifest file. For an app targeting SDK version U or later, the user must have granted the app with the permission to start a projection, before the app starts a foreground service with the type android.content.pm.ServiceInfo.FOREGROUND_SERVICE_TYPE_MEDIA_PROJECTION. Additionally, the app must have started the foreground service with that type before calling this API here, or else it'll receive a SecurityException from this API call, unless it's a privileged app.

Manifest:

<uses-permission android:name="android.permission.POST_NOTIFICATIONS" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<uses-permission android:name="Manifest.permission.FOREGROUND_SERVICE_MEDIA_PROJECTION" />

<service
   android:name=".service.ScreenCaptureService"
   android:enabled="true"
   android:exported="false"
   android:foregroundServiceType="mediaProjection"
   android:stopWithTask="false" />

My actions:

  1. Create ScreenCaptureService.
  2. Requesting user permission to capture screen.
  3. If permission is granted, the foreground service starts.
public int onStartCommand(Intent intent, int flags, int startId) {
   final String intent_action = intent.getAction();

   switch (intent_action) {
      case ACTION_START_SERVICE:
        _is_alive = true;
        _window_name = intent.getStringExtra(EXTRA_WINDOW_NAME);
        startActivity(new Intent(this, ScreenCapturePermissionActivity.class)
        .setFlags(Intent.FLAG_ACTIVITY_NEW_TASK));
           break;
      case ACTION_PERMISSION_RESULT:
        handleScreenCapPermResult();
           break;
      case ACTION_STOP_SERVICE:
      default:
        freeServiceResources();
   }

   return START_STICKY;
}
private void startWithNotification() {
        Notification notification =
                new NotificationCompat.Builder(
                    App.context(), 
                    getString(R.string.sys_notification_chan_ID))
                .setSmallIcon(R.drawable.ic_foreground_notification)
                .setOngoing(true)
                .setContentText(getString(R.string.foreground_service_message))
                .setCategory(Notification.CATEGORY_SERVICE)
                .build();


        if (Build.VERSION.SDK_INT > 28)
            startForeground(
              1, 
              notification, 
              ServiceInfo.FOREGROUND_SERVICE_TYPE_MEDIA_PROJECTION);
        else
            startForeground(1, notification);
}

UPDATE: I think this is a bug in the SDK that comes with Android Studio. ContextCompat.checkSelfPermission always returns PERMISSION_DENIED regardless of when permission is requested. I checked all launch options. The result is always the same. And mediaProjection does not start with the received intent.

protected void onCreate(Bundle savedInstanceState) {
   super.onCreate(savedInstanceState);

   ActivityResultLauncher<Intent> startMediaProjection =
      registerForActivityResult(new ActivityResultContracts
             .StartActivityForResult(),
             result -> {
                if (result.getResultCode() == RESULT_OK) {
                    Log.e("DEBUG", "RESULT_OK");
                    if (android.os.Build.VERSION.SDK_INT > 33) {
                       int perm_code = 
                           ContextCompat.checkSelfPermission(
                           this,
                           Manifest
                            .permission
                             .FOREGROUND_SERVICE_MEDIA_PROJECTION);
                       if (perm_code == PackageManager.PERMISSION_GRANTED) {
                         Log.e("DEBUG", "PERMISSION_GRANTED");
                       } else {
                         Log.e("DEBUG", "PERMISSION_DENIED");                 
                       }
                   } else {
                     Log.e("DEBUG", "PERMISSION_GRANTED");
                    }
                } else {
                    Log.e("DEBUG", "RESULT_CANCELED");
                }

                ScreenCaptureService.handlePermission();
     finish();
     });

  final MediaProjectionManager manager =
                     getSystemService(MediaProjectionManager.class);    
  startMediaProjection.launch(manager.createScreenCaptureIntent());
}


2023-10-22

Move onboarding logic into hook/context

I have an onboarding process that requires the user to click a button to acknowledge or dismiss the screens and make it to the regular app.

Currently I have the logic laid out like this, split between my Nav and Onboarding components.

Nav

const [onBoarded, setOnBoarded] = useState(false);

    useEffect(() => {
        const isOnBoarded = async () => {
            const ob = await EncryptedStorage.getItem('ONBOARDED');
            setOnBoarded(!!ob);
        }
        isOnBoarded();
    }, []);

    return (
        <Stack.Navigator
            initialRouteName="onboarding"
        >
            {!onBoarded && <Screen name="onboarding" component={Onboarding} />}
            <Screen name="MainApp" component={MainApp} />
            ...

Onboarding


    const handleOnBoarding = () => {
        EncryptedStorage.setItem('ONBOARDED', 'true');
        navigation.navigate('MainTabNavigator');
    }

   return (
      ...
      <Button onPress={handleOnBoarding}>skip</Button>
      ...
   );

My question is... Is it possible to move this into a hook and keep the same functionality?

I thought it may be possible to do something like

const useOnBoarding = () => {
  const [onBoarded, setOnBoarded] = useState(false);

  useEffect(() => {
    const isOnBoarded = async () => {
      const ob = await EncryptedStorage.getItem('ONBOARDED');
      setOnBoarded(!!ob);
    }
    isOnBoarded();
  }, []);

  const handleOnBoarding = () => {
    EncryptedStorage.setItem('ONBOARDED', 'true');
    setOnBoarded(true);
  }

  return {onBoarded, handleOnBoarding};
}

export default useOnBoarding;

But I can't seem to get it working.


Nav

const {onBoarded} = useOnBoarding();

    return (
        <Stack.Navigator
            initialRouteName="onboarding"
        >
            {!onBoarded && <Screen name="onboarding" component={Onboarding} />}
            <Screen name="MainApp" component={MainApp} />
            ...

Onboarding

 const {handleOnBoarding} = useOnBoarding();

   return (
      ...
      <Button onPress={handleOnBoarding}>skip</Button>
      ...
   );


Loader is not getting hide after download is completed in C# asp.net

I am showing a loader in OnClientClick, and it displays while the download is in progress. But when I try to hide the loader once the process completes, it doesn't work. The loader continuously displays and loads.

Here is the code.

function showloadingGif_UPLOAD() {
            document.getElementById('ContentPlaceHolder1_divShowLoadingGif').style.display = 'inline';
            return true;
        }

        function HideLoader() {
            document.getElementById('ContentPlaceHolder1_divShowLoadingGif').style.display = 'none';
        }
--loader div

<div id="ContentPlaceHolder1_divShowLoadingGif" class="dvLoader" style="display: none;"> 
                <img id="img2" alt="" src="images/1487.png" />
            </div>


-- button click

<input type="submit" name="ctl00$ContentPlaceHolder1$btnDownloadInfo" value="Report Download" onclick="showloadingGif_UPLOAD();" id="ContentPlaceHolder1_btnDownloadInfo" class="btn btn-primary downnloadReport" />

Also below is the server side code to call the hide function.

protected void btnDownloadInfo_Click(object sender, EventArgs e)
    {
        DataTable dtExcelData = new DataTable();
        try
        {
            CommonUser ObjUser = new CommonUser();
            string strDateFilter = txtDateSelection.Value;
            dtExcelData = ObjUser.GET_EXCEL_REPORT(strDateFilter);


            CommonDB.WriteLog("Dt Count 1 : " + dtExcelData.Rows.Count, ConfigurationManager.AppSettings["AIRFIBER_LOG"].ToString());

            if (dtExcelData != null && dtExcelData.Rows.Count > 0)
            {
                CommonDB.WriteLog("Dt Count 2 : " + dtExcelData.Rows.Count, ConfigurationManager.AppSettings["AIRFIBER_LOG"].ToString());
                DownloadReport(dtExcelData);                    
            }
            else
            {
                ScriptManager.RegisterStartupScript(Page, GetType(), "disp_confirm", "<script>HideLoader()</script>", false);
                CommonDB.WriteLog("Dt Count 3 : " + dtExcelData.Rows.Count, ConfigurationManager.AppSettings["AIRFIBER_LOG"].ToString());
                ScriptManager.RegisterStartupScript(this, GetType(), "showalert", "alert('No record found');", true);
            }

        }
        catch (Exception ex)
        {
            ScriptManager.RegisterStartupScript(Page, GetType(), "disp_confirm", "<script>HideLoader()</script>", false);
            string strErrorMsg = ex.Message.ToString() + " " + "StackTrace :" + ex.StackTrace.ToString();
            CommonDB.WriteLog("ERROR:" + strErrorMsg, ConfigurationManager.AppSettings["AIRFIBER_LOG"].ToString());
        }
    }


public static void DownloadReport(DataTable dtRecord)
    {
        try
        {
           
            string strFilename = string.Empty;
            
            using (XLWorkbook wb = new XLWorkbook())
            {
                CommonDB.WriteLog("Dt Count 3 : " + dtRecord.Rows.Count, ConfigurationManager.AppSettings["AIRFIBER_LOG"].ToString());
                wb.Worksheets.Add(dtRecord, "SheetName");
                strFilename = DateTime.Now.ToString();

                CommonDB.WriteLog("Dt Count 4 : " + dtRecord.Rows.Count, ConfigurationManager.AppSettings["AIRFIBER_LOG"].ToString());


                HttpContext.Current.Response.Clear();
                HttpContext.Current.Response.Buffer = true;
                HttpContext.Current.Response.Charset = "";
                HttpContext.Current.Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
                
                HttpContext.Current.Response.AddHeader("content-disposition", "attachment;filename=JIO_LOS_REPORT_"+ strFilename +".xlsx");
                CommonDB.WriteLog("Dt Count 5 : " + dtRecord.Rows.Count, ConfigurationManager.AppSettings["AIRFIBER_LOG"].ToString());
                
                using (MemoryStream MyMemoryStream = new MemoryStream())
                {
                    CommonDB.WriteLog("Dt Count 6 : " + dtRecord.Rows.Count, ConfigurationManager.AppSettings["AIRFIBER_LOG"].ToString());
                    wb.SaveAs(MyMemoryStream);
                    MyMemoryStream.WriteTo(HttpContext.Current.Response.OutputStream);
                    HttpContext.Current.Response.Flush();
                    HttpContext.Current.Response.End();
                    CommonDB.WriteLog("Dt Count 7 : " + dtRecord.Rows.Count, ConfigurationManager.AppSettings["AIRFIBER_LOG"].ToString());                       
                }
            }
        }
        catch (Exception ex)
        {   
            string strErrorMsg = ex.Message.ToString() + " " + "StackTrace :" + ex.StackTrace.ToString();
            CommonDB.WriteLog("ERROR:" + strErrorMsg, ConfigurationManager.AppSettings["AIRFIBER_LOG"].ToString());
        }
    }


optimize TIMESTAMPDIFF in mysql query

I need help optimizing a query on a large data table. When I test manually it runs fast, but in my slow query log it is logged as taking 10s+ for some reason.

SELECT  q.id, q.village_id, q.to_player_id, q.to_village_id,
        q.proc_type, TIMESTAMPDIFF(SECOND, NOW(),q.end_date) remainingTimeInSeconds
    FROM table
   

I expect to output the rows whose time has ended, meaning the time left must be 0 or less, in ascending order.

It should order by the end time itself, because when we have many attacks we must arrange them according to which attack is supposed to arrive first, then process them one by one.
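For what it's worth, the intended result set ("rows whose time is up, soonest first") can usually be expressed without filtering on a computed TIMESTAMPDIFF, which lets the database use an index on end_date. A minimal sketch of the idea, using SQLite as a stand-in for MySQL (table and column names shortened from the question; datetime('now') stands in for NOW()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE q (id INTEGER, end_date TEXT)")
conn.executemany(
    "INSERT INTO q VALUES (?, ?)",
    [(1, "2020-01-01 00:00:00"),   # already ended
     (2, "2019-06-01 00:00:00"),   # ended even earlier
     (3, "2999-01-01 00:00:00")],  # still pending
)

# Filter on the raw column so an index on end_date can serve both the
# filter and the sort, instead of computing a per-row time difference.
rows = conn.execute(
    "SELECT id FROM q WHERE end_date <= datetime('now') ORDER BY end_date ASC"
).fetchall()
print(rows)  # [(2,), (1,)] -- expired rows first, by end time
```

In MySQL terms the equivalent would be `WHERE end_date <= NOW() ORDER BY end_date`, keeping TIMESTAMPDIFF (if it is needed at all) only in the select list.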



2023-10-21

Can't import Optuna

I am trying to import Optuna in a Jupyter Notebook. I have previously installed pytorch-2.0.1. I have tried downgrading PyTorch as well as uninstalling and reinstalling Optuna and sqlalchemy. The error I now get when running import optuna is:

"ImportError: cannot import name 'NOT_EXTENSION' from 'sqlalchemy.orm.interfaces'"

Does anybody know what the error could be and if there is a fix to this?
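This kind of ImportError is characteristic of a version mismatch between the two libraries rather than of PyTorch: NOT_EXTENSION was part of the SQLAlchemy 1.x API, so code written against 1.4 can fail to import next to a 2.x install. Before pinning anything, a quick stdlib-only way to see which versions are actually present (a sketch; package names as used in the question):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(packages):
    """Return {package: version string or None} for each requested package."""
    out = {}
    for pkg in packages:
        try:
            out[pkg] = version(pkg)
        except PackageNotFoundError:
            out[pkg] = None  # not installed in this environment
    return out

print(installed_versions(["optuna", "sqlalchemy"]))
```

If the versions disagree (for example SQLAlchemy 2.x next to an older Optuna), the usual fix is to pin one side, e.g. `pip install "sqlalchemy<2.0"` or upgrading Optuna, though which pin is right depends on the environment.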



2023-10-20

What is the best way to create user config at first user login?

In my quarkus application, I want to have some data inserted in my database for each user.

The database is Neo4j. Consequently, I would like some code in my application that creates the various data and sends it to the database.

Currently, I use the RolesAugmentor, as described in Security Tips and Tricks. Unfortunately, as it is invoked for each request, we get multiple nodes generated for each new user. How can I have the data inserted only once for each user?

EDIT 1 More clearly, I have a RolesAugmentor class containing the following code

@ApplicationScoped
public class RolesAugmentor implements SecurityIdentityAugmentor {

    @Override
    public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) {
        return Uni.createFrom().item(build(identity));
        // Do 'return context.runBlocking(build(identity));'
        // if a blocking call is required to customize the identity
    }
    private Supplier<SecurityIdentity> build(SecurityIdentity identity) {
        if (identity.isAnonymous()) {
            return () -> identity;
        } else {
            // create a new builder and copy principal, attributes, credentials and roles
            // from the original identity
            QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder(identity);
    
            JWTCallerPrincipal caller = (JWTCallerPrincipal) identity.getPrincipal();
            String email = caller.getClaim("email");
            if (!isUserAlreadyInDB(email)) {
                synchronized(RolesAugmentor.class) {
                    if (!isUserAlreadyInDB(email)) {
                        String name = caller.getClaim("given_name");
                        String famillyName = caller.getClaim("family_name");
                        addUserToDB(email, name, famillyName);
                    }
                }
            }
    
            // add custom role source here
            builder.addRoles(extractUserRoles(email));
    
            return builder::build;
        }
    }
}

And I'm 99% sure this is not the right place to add user-creation code. Am I right? What is the correct way?



2023-10-19

Why don't I see nodes on the map?

I have a dataset that has longitude and latitude of subway stations. Here is an example what I get when I expand the results of MATCH (n) RETURN n; in Memgraph Lab.

{
   "id": 177,
   "labels": [
      "Station"
   ],
   "properties": {
      "latitude": 52.496161111,
      "longitude": 13.342702777,
      "name": "U-Bahnhof Viktoria-Luise-Platz"
   },
   "type": "node"
}

Query result - data:


I can see that Latitude and Longitude are defined for all nodes.

Graph schema:


I expected to get results shown on a map, not as a "circle" of nodes. How can I show my results on a map?

Graph results:




Odoo Color one2many Treeview Line

Can anyone help me get my case running? I want to color a line of a one2many tree view based on the value of a field inside the line.

<field colspan="2" name="timesheet_ids" class="custom-field" widget="one2many" nolabel="1">
    <tree editable="bottom">
        <field name="sequence" colors="red:is_line_nonsense == True"/>
        <field name="time_event_type"/>
        <field name="date" attrs="{'readonly': [('is_date_readonly', '=', True)]}"/>
        <field name="is_begin_readonly" invisible="1"/>
        <field name="is_end_readonly" invisible="1"/>
        <field name="is_date_readonly" invisible="1"/>
        <field name="is_line_nonsense" invisible="1"/>
        <field name="begin_time" attrs="{'readonly': [('is_begin_readonly', '=', True)]}" widget="float_time"/>
        <field name="end_time" attrs="{'readonly': [('is_end_readonly', '=', True)]}" widget="float_time"/>
        <field name="total_hours" widget="float_time" readonly="true"/>
    </tree>
</field>

I want the line background to turn red when the field is_line_nonsense = true.

Here is the JS code.

odoo.define('chn_events.highlight_rows', function (require) {
    "use strict";

    var ListRenderer = require('web.ListRenderer');
    ListRenderer.include({
        _renderRow: function (record, index) {
            var $row = this._super.apply(this, arguments);
            var color = record.data.color;
            if (color) {
                $row.css('background-color', color);
            }
            return $row;
        },
    });
});

But it doesn't work: the script is loaded, but no color is shown.

I also tried a module: https://odoo-community.org/shop/colorize-field-in-tree-views-2814#attr=21700. That also didn't work, but I think that's because the module is for version 15 and I have 16.

Does anyone have an idea for the solution?



2023-10-18

Modify the TableRelation property in Business Central

I'm trying to modify the TableRelation property of a standard field in BC, but without success. Despite all my attempts, it seems my modification is not picked up by Business Central.

Do you have any possible solutions to this problem?

I've tried to replace the entire standard logic of the property, without success. I've also tried to extend the property, adding some filters to the field using a TableExtension, again without success.



2023-10-17

Snapcraft python builds - how to get it to pack - path doesn’t exist error?

So I am new to Snapcraft; however, I have been having this issue despite following examples and tutorials, so I figured I would ask here.

Machine setup: I am on a Windows machine, running Ubuntu in a VM to compile the snap. Sadly, work has not given me an Ubuntu dev machine. Our deployments are on Linux machines (hence the need for a snap), but they gave me a Windows machine.

Snap problem: I have a file (main.py) that is normally launched as "python3 -m main", and we want to make it into a snap.

So far I have this as my snap:

name: asdf
version: '4.0.0'
summary: asdf
description: |
    qweradsf - blah-etc...
grade: devel
confinement: devmode
base: core22

parts:
  py-dep:
    plugin: python
    source: .
    python-packages:
      - pyserial
      - ftputil

apps:
  asdf:
    command: usr/bin/python -m main
    environment:
      PYTHONPATH: $SNAP/usr/lib/python3/dist-packages:${SNAP}/usr/local/lib

For the command part I have tried all the combinations below

command: usr/bin/python -m main
command: /usr/bin/python -m main
command: /bin/python -m main
(I also tried the above with the /usr/lib/python path)
command: python -m main
command: python3 -m main

When it goes to pack the snap file, I get an error that the path of the first argument (usr, bin, python, python3) doesn't exist. I also tried adding PYTHONPATH. Nothing seems to work.

How do I call this Python code? Do I need a source? The examples I saw didn't use a source for the plugin (just the .), so I have mostly left that part alone.



UTL_HTTP making API call to fetch token , bad request

I am using utl_http for the first time, building my code from the documentation and online resources. I am trying to call an API that returns a token. The API calls work in Postman, but I am not able to get them working on the PL/SQL side using utl_http. I keep getting a Bad Request error or an invalid-credentials error, but I know the credentials in my code are the same as in Postman.

I am not able to figure out what I am missing.

Here is the image from POSTMAN:

Headers


Body


Here is my code:

declare
    req utl_http.req;
    res utl_http.resp;
    
    l_lvc_content       varchar2(4000);
    
    buffer              varchar2(4000); 
    endLoop             boolean;
 
begin   

    -- making request
    begin
        --utl_http.set_persistent_conn_support(true, 30);
    
        utl_http.set_transfer_timeout(15);
        utl_http.set_detailed_excp_support(true);
        utl_http.set_wallet('file:/mywallet/wallet', 'MywalletPASS');
            
        req := utl_http.begin_request('https://myurl.com/api/oauth2/token', 'POST');
                      
        utl_http.set_header(req, 'Authorization', 'Basic QzkzZ0N6amVCbGlaNWlXdEF1dUVnemasaZFcEFpMXdzTE46TXFvdWxpcW85UExBbjM2Ug==');

        utl_http.set_header(req, 'Content-Type', 'application/x-www-form-urlencoded');
        
        l_lvc_content := 'grant_type=password&username=MyUserNAME&password=MyUserPASS#&channel=Mychannel';
        
        utl_http.set_header(req, 'Content-Length', nvl(length(l_lvc_content),0) );
        
        utl_http.write_text(req, l_lvc_content);      
                            
        res := utl_http.get_response(req);
    
    exception
        when utl_http.request_failed 
            then dbms_output.put_line('ERROR : Request Failed : ' || utl_http.get_detailed_sqlerrm );
                utl_http.end_response(res);
        when others
            then dbms_output.put_line('RESPONSE ERROR' || SQLERRM);
            utl_http.end_response(res);
    end;    
        
    dbms_output.put_line('RESPONSE Received');
    
    -- process the response from the HTTP call
    begin
      
        dbms_output.put_line('Reading the RESPONSE');
       
        dbms_output.put_line ('Status code: ' || res.status_code);

        dbms_output.put_line ('Reason : ' || res.reason_phrase);       
           
        loop
                exit when endLoop;
         
                begin
                        utl_http.read_line( res, buffer, true );
                                
                        if (buffer is not null) and length(buffer)>0 then
                                dbms_output.put_line(buffer);
                        end if;
                                
                exception when utl_http.END_OF_BODY then
                        endLoop := true;
                end;
         
        end loop;        
                    
        utl_http.end_response(res);
                    
        dbms_output.put_line('RESPONSE read complete');
    
    exception
        when utl_http.end_of_body 
            then utl_http.end_response(res);                
        when others
            then dbms_output.put_line('RESPONSE ERROR' || SQLERRM);
            utl_http.end_response(res);
    end;

exception
        when others
            then dbms_output.put_line('MAIN ERROR : '|| SQLERRM);
            utl_http.end_response(res);  

end;

I have validated the request against the raw request output in the Postman console and matched it on the PL/SQL side, but the PL/SQL call always gets a 400 Bad Request response.

Any idea what I am missing?
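One detail worth ruling out before anything wallet-related: the form body here is concatenated by hand, and the example password ends in #, a character that must be percent-encoded in an application/x-www-form-urlencoded body. Postman encodes form fields automatically, while utl_http.write_text sends the string exactly as written. A sketch of the encoding Postman applies (values are the placeholders from the question):

```python
from urllib.parse import urlencode

fields = {
    "grant_type": "password",
    "username": "MyUserNAME",
    "password": "MyUserPASS#",   # the '#' must become %23 in a form body
    "channel": "Mychannel",
}
body = urlencode(fields)
print(body)
# grant_type=password&username=MyUserNAME&password=MyUserPASS%23&channel=Mychannel
```

On the PL/SQL side the equivalent would be passing each value through utl_url.escape (with its escape-reserved-characters argument set to true) before concatenating the body, then computing Content-Length from the escaped string.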



2023-10-16

On Running a docker container, It is exiting automatically

Git repo: https://github.com/samAd0san/two-tier-flask-app.git

I used the Dockerfile to build an image, then ran a Flask app container.

docker run -d -p 5000:5000 --network=twotier \
           -e MYSQL_HOST=mysql \
           -e MYSQL_USER=admin \
           -e MYSQL_PASSWORD=admin \
           -e MYSQL_DB=myDb \
           --name=flaskapp flaskapp:latest

The problem occurs when I launch this command:

docker ps -a
2860157851d9   flaskapp:latest   "python app.py"   5 minutes ago   Exited (1) 5 minutes ago   flaskapp

The container should keep running after executing the command, so that the Flask app is deployed in the container and connected to the MySQL container.

Ultimately I'm not able to access the application.



2023-10-15

How to download xlsx file in laravel?

I use the PhpSpreadsheet library to generate Excel documents. I need to either save the file to the root/storage/app/public/export/file.xlsx directory, or immediately download the file in the browser. Everything is implemented in Laravel 9. Can you tell me how to write the code correctly?

My code, now:

use PhpOffice\PhpSpreadsheet\Spreadsheet;
use PhpOffice\PhpSpreadsheet\Writer\Xlsx;
...
    $spreadsheet = new Spreadsheet();
    $activeWorksheet = $spreadsheet->getActiveSheet();
    $activeWorksheet->setCellValue('A1', 'Hello World !');
    $writer = new Xlsx($spreadsheet);
    $writer->save("file.xlsx");
        

At the moment the file is saved to the 'public' folder. Help me, please!



How investigate disk cache usage in Win32 application?

I have a workload similar to the following:

while True:
    data = get_data_from_network()
    filename = sha1(data)
    write_to_file(filename, data, data.size())
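For concreteness, here is the loop above as a runnable Python sketch (a sketch of the described workload, not the author's code; the network read is stubbed out, and hashlib.sha1 provides the content-derived filename):

```python
import hashlib
import os
import tempfile

def store_blob(directory: str, data: bytes) -> str:
    """Write data to a file named after its SHA-1 hex digest."""
    name = hashlib.sha1(data).hexdigest()
    path = os.path.join(directory, name)
    with open(path, "wb") as f:   # streamed out once, rarely read back
        f.write(data)
    return path

with tempfile.TemporaryDirectory() as d:
    p = store_blob(d, b"payload from the network")
    print(os.path.basename(p))    # 40-character hex name
```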

Occasionally I read back from the file, but it's not very common. Importantly, I get a lot of these network requests; it's not uncommon for me to write a gigabyte of data out to the disk this way. So for the most part I'm effectively just streaming large volumes of data to the disk. There is this article from Raymond Chen where he advises the customer not to use the flag, because as Raymond puts it:

If the application reads back from the file, the read can be satisfied from the disk cache, avoiding the physical I/O entirely

But I'm not sure if this applies to me, because depending on the size of the cache, there's a pretty good chance that by the time I go to read that data again, it's already been pushed out by some other data.

I can bypass this with FILE_FLAG_NO_BUFFERING when I call CreateFile(), but before I just go and blindly do this, I'm wondering how can I investigate the impact of this from a performance point of view. I can just time my application, sure, but I'd like to go deeper.

For starters, how big even is the OS cache? Is it per-process, per-file, or global? Is the size configurable? Can I query its size programmatically via an API? Is there a way for me to investigate whether it's being thrashed by my workload? Is there a way to run my program and then determine how many disk reads were served from the memory cache as opposed to the physical media?



2023-10-14

How to force `stat_poly_line()` to use a specific non-zero y-intercept?

Using stat_poly_line() from package 'ggpmisc', one can fit a polynomial to data by default using lm() as method. You can force the fit through zero with either: formula = y ~ x + 0 or formula = y ~ x - 1. I cannot force it through a specific non-zero y-intercept for my linear model. In this case, I need to force it through 5.05.

Note: I recognize linear models are rarely statistically useful when the y-intercept is forced, but in my case I believe it is fine.

Here is my data:

mydata <- structure(list(y = c(20.2, 29.74, 22.37, 24.51, 
37.2, 31.43, 43.05, 54.36, 65.44, 67.28, 46.02), x = c(0.422014140000002, 
1.09152966, 1.3195521, 3.54231348, 2.79431778, 3.40756002, 5.58845772, 
7.10762298, 9.70041246, 11.7199653, 15.89668266)), row.names = c(NA, 
-11L), class = c("tbl_df", "tbl", "data.frame"))

And here is a simplified version of my plot:

myplot <- ggplot(mydata, aes(x = x, y = y)) +
  stat_poly_line(se = FALSE, 
                 linetype = "dashed", 
                 na.rm = TRUE, 
                 formula = y ~ x + 0) +
  stat_poly_eq(use_label(c("eq", "R2", "adj.R2")), 
               na.rm = TRUE, 
               formula = y ~ x + 0) +
  geom_point(size = 2.5) 

The x + 0 in the formula comes from the package's guide for forcing the intercept through 0. I tried using 5.05 in place of the 0 to represent a y-intercept at 5.05 for the linear model, but this approach does not work, nor does using it on the y side of the formula.

I could use another package relatively quickly, but I feel like there is a simple solution I can implement here.

Any help?
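The underlying algebra is simple: with the intercept pinned at b0, subtract b0 from y and fit a zero-intercept line, giving slope = Σ x·(y − b0) / Σ x². A plain-Python sketch on (rounded) values from mydata, illustrating the math only; wiring this into stat_poly_line() would still need an lm()-compatible offset formula such as I(y - 5.05) ~ x + 0, which is an untested suggestion:

```python
# Least-squares slope with the intercept fixed at b0:
# minimize sum((y - b0 - m*x)^2)  =>  m = sum(x*(y - b0)) / sum(x*x)
def slope_with_fixed_intercept(x, y, b0):
    num = sum(xi * (yi - b0) for xi, yi in zip(x, y))
    den = sum(xi * xi for xi in x)
    return num / den

# x/y rounded from the question's mydata
x = [0.42, 1.09, 1.32, 3.54, 2.79, 3.41, 5.59, 7.11, 9.70, 11.72, 15.90]
y = [20.2, 29.74, 22.37, 24.51, 37.2, 31.43, 43.05, 54.36, 65.44, 67.28, 46.02]

m = slope_with_fixed_intercept(x, y, 5.05)
print(round(m, 3))  # fitted line: y = 5.05 + m * x
```

By construction the fitted line passes exactly through (0, 5.05), which is the behavior the formula interface is being asked to produce.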



Command CodeSign failed with a nonzero exit code Version 15

Hi. After installing Sonoma (before that, no problem), Xcode was blocked with "Command CodeSign failed with a nonzero exit code". None of the proposed solutions worked, but I went to Settings / Locations / Derived Data / Advanced / Custom and changed the default "Custom (Absolute)" value to "Relative to Workspace": it now works perfectly in the simulator and on iPhone and iPad. Good luck! Best regards. I hope I help some people 😃



2023-10-13

Django Superuser Permission Issue: "You don’t have permission to view or edit anything"

I'm encountering an issue in my Django project where I'm unable to access the admin site as a superuser. When I try to log in as a superuser, I receive the error message: "You don’t have permission to view or edit anything." I've followed the standard steps for creating a superuser and configuring my custom user model, but I can't figure out why this permission issue is occurring.

The Django administration site shows:

Site administration: "You don’t have permission to view or edit anything."

For a fresh start I:

- deleted migrations
- deleted db.sqlite3


powershell:

>>Python manage.py makemigrations
Migrations for 'auth_app':
  auth_app\migrations\0001_initial.py
    - Create model CustomUser

>>Python manage.py migrate
 Operations to perform:
  Apply all migrations: admin, auth, auth_app, contenttypes, sessions
 Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying auth.0012_alter_user_first_name_max_length... OK
  Applying auth_app.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying sessions.0001_initial... OK

>> python manage.py createsuperuser
 Email: helloadmin@mail.com
 Name: helloname
 Password:
 Password (again):
 This password is too short. It must contain at least 8 characters.
 This password is too common.
 This password is entirely numeric.
 Bypass password validation and create user anyway? [y/N]: y
 Superuser created successfully.

>>python manage.py runserver
 Watching for file changes with StatReloader
 Performing system checks...
 System check identified no issues (0 silenced).
 October 12, 2023 - 20:31:50
 Django version 4.2.5, using settings 'projectlogin.settings'
 Starting development server at http://127.0.0.1:8000/
 Quit the server with CTRL-BREAK.

models.py file:

from django.contrib.auth.models import AbstractBaseUser, BaseUserManager, PermissionsMixin
from django.db import models

class CustomUserManager(BaseUserManager):
  def create_user(self, email, name, password=None):
    if not email:
      raise ValueError('User must have an email address')
    if not name:
      raise ValueError('User must have a name')
    email = self.normalize_email(email)

    user = self.model(
      email=email,
      name=name,
    )

    user.set_password(password)
    user.save(using=self._db)

    return user

  def create_superuser(self, email, password, name, **extra_fields):
    extra_fields.setdefault('is_staff', True)
    extra_fields.setdefault('is_superuser', True)
    extra_fields.setdefault('is_active', True)

    if extra_fields.get('is_staff') is not True:
      raise ValueError(_('Superuser must have is_staff=True.'))
    if extra_fields.get('is_superuser') is not True:
      raise ValueError(_('Superuser must have is_superuser=True.'))

    return self.create_user(email, name=name, password=password)


class CustomUser(AbstractBaseUser, PermissionsMixin):
  email = models.EmailField(unique=True)
  name = models.CharField(max_length=50)
  is_staff = models.BooleanField(default=True)
  is_active = models.BooleanField(default=True)
  is_superuser = models.BooleanField(default=False)

  REQUIRED_FIELDS = ['name']
  USERNAME_FIELD = 'email'
  objects = CustomUserManager()

  def __str__(self):
    return self.email
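Tracing the flag flow of the manager above with plain Python (no Django; the user class is mocked, all names here are illustrative) shows where the superuser flags end up: create_superuser collects them in extra_fields, but create_user never receives or applies them, so is_superuser keeps its model default of False:

```python
# Framework-free mock of the manager logic above.
class MockUser:
    def __init__(self, email, name):
        self.email = email
        self.name = name
        self.is_staff = True        # model defaults from CustomUser
        self.is_superuser = False


def create_user(email, name, password=None):
    # Mirrors create_user above: extra_fields is never passed in or applied.
    return MockUser(email, name)


def create_superuser(email, password, name, **extra_fields):
    extra_fields.setdefault('is_staff', True)
    extra_fields.setdefault('is_superuser', True)
    # Mirrors create_superuser above: extra_fields is dropped right here.
    return create_user(email, name=name, password=password)


admin = create_superuser('helloadmin@mail.com', 'pw', 'helloname')
print(admin.is_superuser)  # False
```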

settings.py file:

from pathlib import Path
BASE_DIR = Path(__file__).resolve().parent.parent
SECRET_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
DEBUG = True

ALLOWED_HOSTS = []

AUTH_USER_MODEL = 'auth_app.CustomUser' # Custom User Model

INSTALLED_APPS = [
    
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'auth_app.apps.AuthAppConfig',
]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

ROOT_URLCONF = 'projectlogin.urls'
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': ["templates"],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
WSGI_APPLICATION = 'projectlogin.wsgi.application'
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
    }
}
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]

MESSAGE_STORAGE = 'django.contrib.messages.storage.session.SessionStorage'
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
STATIC_URL = 'static/'
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'

urls.py file:


from django.contrib import admin
from django.urls import path, include


urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('auth_app.urls')),  # Include your app's URL patterns here
    
]


2023-10-12

GitHub shows "Processing updates" on my Pull Request after every push

Previously, every git push to an open Pull Request was shown immediately. It might or might not have triggered CI, but the commit showed up in the "Commits" tab right away.

Now, every time I push another commit to the Pull Request branch, it doesn't show up immediately; instead, the "Processing updates" loading indicator appears:

the "Processing updates" loading indicator

This is worrying, because the GitHub interface still allows the Pull Request to be merged, even though some commits are not processed yet.

I haven't changed any configuration though, neither in the repo, nor in my profile.

What happened? Why the change?

The "external link" icon refers to the page More details provided when a pull request is merged indirectly or is still processing updates. The page contains a paragraph Pushed commits are still being processed, which might be relevant, but it doesn't have a lot of info:

Help article paragraph screenshot

Specifically, it doesn't explain why it now "takes longer than usual for [new commits] to be processed and appear in the commit list".



2023-10-11

String comparison versus single-character comparison performance

On a C++ exercise website, I found out that the following code

int finalValueAfterOperations(vector<string>& operations) 
{
    int result = 0;
    for(int i = 0; i < operations.size(); ++i)
    {
        operations[i] == "++X" || operations[i] == "X++"?  ++result : --result;
    }

    return result;
}

is faster than

int finalValueAfterOperations(vector<string>& operations) 
{
    int result = 0;
    for(int i = 0; i < operations.size(); ++i)
    {
        operations[i][1] == '+'? ++result : --result;
    }

    return result;
}

What may be the underlying reason for this?

I would have assumed that the single-character check would be more efficient than the full string comparison. I guess the string comparison implementation has some optimization happening under the hood, but I am not sure what happens. Does someone have any idea?
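As a side note, the two versions are logically equivalent on valid input: every operation is one of ++X, X++, --X, X--, so the middle character alone determines the sign. A quick Python check of that equivalence (independent of the C++ performance question):

```python
import itertools

# Both predicates classify an operation by whether it increments (+) or
# decrements (-); on the four valid tokens they must agree.
def final_value_full(ops):
    return sum(1 if op in ("++X", "X++") else -1 for op in ops)

def final_value_char(ops):
    # Valid tokens all carry their sign in the middle character.
    return sum(1 if op[1] == '+' else -1 for op in ops)

# Exhaustively compare the two versions over all length-3 inputs.
tokens = ["++X", "X++", "--X", "X--"]
for combo in itertools.product(tokens, repeat=3):
    assert final_value_full(combo) == final_value_char(combo)
print(final_value_full(["++X", "X++", "--X"]))  # 1
```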



How to define in Julia ReinforcementLearning.jl an action space that changes with each state?

I want to implement an environment in Julia ReinforcementLearning.jl that has a continuous action space that changes as a function of the state. The state is a positive integer n <= nmax for a given nmax. The action space is the n-dimensional box [0, 1]^n; that is, an action is a vector of size n whose elements are in [0, 1].
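Framework-free, the space I mean can be sketched like this (plain Python just to pin down the semantics; this is not ReinforcementLearning.jl API and the names are illustrative):

```python
import random

NMAX = 5  # the given upper bound on the state


def action_space(n):
    """Action space for state n: the box [0, 1]^n, as n closed intervals."""
    assert 1 <= n <= NMAX
    return [(0.0, 1.0)] * n


def sample_action(n):
    """Draw one action uniformly from [0, 1]^n."""
    return [random.random() for _ in range(n)]


print(len(action_space(3)))  # 3: the box is 3-dimensional when the state is 3
```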

What I implemented is the RLBase.action_space(env::MyEnv) which is essentially

using ReinforcementLearning
using IntervalSets # for ClosedInterval


RLBase.action_space(env::MyEnv) = Space(ClosedInterval{Float32}[0..1 for _ in 1:state(env)]) 
# state(env) is an integer between 1 and nmax.

I think this implementation is not complete, because the documentation of ReinforcementLearning.jl mentions that legal_action_space and legal_action_space_mask should be implemented when ActionStyle is FULL_ACTION_SET.

How should I implement legal_action_space and legal_action_space_mask, and should I use ActionTransformedEnv when defining my environment?



multidimensional vectors of maps progressively slower after multiple initializations

If I allocate a multidimensional vector of maps multiple times, it gets slower and slower. If I try a multidimensional vector of pairs instead, each iteration gets the same performance. Why are maps (and multimaps) different?

#include <cstdio>
#include <ctime>
#include <map>
#include <vector>
using namespace std;

void
myReserve(vector<vector<vector<vector<vector<map<int, int>>>>>> myInts, int a, int b, int c, int d, int e)
{
    clock_t begin = clock();
    myInts.reserve(a);
    myInts.resize(a);
    for (int aa = 1; aa < a; aa++)
    {
        myInts[aa].reserve(b);
        myInts[aa].resize(b);
        for (int bb = 1; bb < b; bb++)
        {
            myInts[aa][bb].reserve(c);
            myInts[aa][bb].resize(c);
            for (int cc = 1; cc < c; cc++)
            {
                myInts[aa][bb][cc].reserve(d);
                myInts[aa][bb][cc].resize(d);
                for (int dd = 1; dd < d; dd++)
                {
                    myInts[aa][bb][cc][dd].reserve(e);
                    myInts[aa][bb][cc][dd].resize(e);
                }
            }
        }
    }
    clock_t end = clock();
    double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
    printf("  ->  elapse time for building is %.4f \n", elapsed_secs);
}
int a = 100, b = 100, c = 30, d = 40, e = 20;
vector<vector<vector<vector<vector<map<int, int>>>>>> myInts;

for (int loop = 1; loop <= 10; loop++)
{
    printf("->  starting loop #%d \n", loop);
    myReserve(myInts, a, b, c, d, e);
}

->  starting loop #1
  ->  elapse time for building is 7.8750
->  starting loop #2
  ->  elapse time for building is 22.7520
->  starting loop #3
  ->  elapse time for building is 20.7190
->  starting loop #4
  ->  elapse time for building is 23.2740
->  starting loop #5
  ->  elapse time for building is 25.5170
->  starting loop #6
  ->  elapse time for building is 27.8460
->  starting loop #7
  ->  elapse time for building is 32.5260
->  starting loop #8
  ->  elapse time for building is 37.5980
->  starting loop #9
  ->  elapse time for building is 44.4400
->  starting loop #10
  ->  elapse time for building is 47.0500



Which atom (box) has subtitle data in MP4 (ISOBMFF)

In ISOBMFF (MP4), which atom (box) holds the subtitle information? I have an MP4 file with subtitles. Using FFmpeg, I added the subtitles to the video:

ffmpeg -i input.mp4 -vf "subtitles=subtitle.srt" -c:v libx264 -c:a aac -strict experimental -b:a 192k output.mp4

Here is the dump of my original MP4 file (output of mp4dump):

[ftyp] size=8+16
  major_brand = mp42
  minor_version = 0
  compatible_brand = isom
  compatible_brand = mp42
[moov] size=8+373326
  [mvhd] size=12+96
    timescale = 1000
    duration = 1187027
    duration(ms) = 1187027
  [trak] size=8+157859
    [tkhd] size=12+80, flags=3
      enabled = 1
      id = 1
      duration = 1187019
      width = 1280.000000
      height = 720.000000
    [mdia] size=8+157759
      [mdhd] size=12+20
        timescale = 30000
        duration = 35610575
        duration(ms) = 1187019
        language = und
      [hdlr] size=12+59
        handler_type = vide
        handler_name = ISO Media file produced by Google Inc.
      [minf] size=8+157648
        [dinf] size=8+28
          [dref] size=12+16
            [url ] size=12+0, flags=1
              location = [local to file]
        [stbl] size=8+157584
          [stsd] size=12+140
            entry_count = 1
            [avc1] size=8+128
              data_reference_index = 1
              width = 1280
              height = 720
              compressor =
              [avcC] size=8+42
                Configuration Version = 1
                Profile = High
                Profile Compatibility = 0
                Level = 31
                NALU Length Size = 4
                Sequence Parameter = [67 64 00 1f ac b4 02 80 2d d8 0b 50 10 10 14 00 00 0f a4 00 03 a9 80 3c 60 ca 80]
                Picture Parameter = [68 ee 3c b0]
          [stts] size=12+12
            entry_count = 1
          [stsc] size=12+1672
            entry_count = 139
          [stco] size=12+10228
            entry_count = 2556
          [stsz] size=12+142308
            sample_size = 0
            sample_count = 35575
          [stss] size=12+3152
            entry_count = 787
        [vmhd] size=12+8, flags=1
          graphics_mode = 0
          op_color = 0000,0000,0000
  [trak] size=8+215196
    [tkhd] size=12+80, flags=3
      enabled = 1
      id = 2
      duration = 1187027
      width = 0.000000
      height = 0.000000
    [mdia] size=8+215096
      [mdhd] size=12+20
        timescale = 44100
        duration = 52347904
        duration(ms) = 1187027
        language = eng
      [hdlr] size=12+59
        handler_type = soun
        handler_name = ISO Media file produced by Google Inc.
      [minf] size=8+214985
        [dinf] size=8+28
          [dref] size=12+16
            [url ] size=12+0, flags=1
              location = [local to file]
        [stbl] size=8+214925
          [stsd] size=12+93
            entry_count = 1
            [mp4a] size=8+81
              data_reference_index = 1
              channel_count = 2
              sample_size = 16
              sample_rate = 44100
              [esds] size=12+41
                [ESDescriptor] size=2+39
                  es_id = 2
                  stream_priority = 0
                  [DecoderConfig] size=2+31
                    stream_type = 5
                    object_type = 64
                    up_stream = 0
                    buffer_size = 0
                    max_bitrate = 0
                    avg_bitrate = 0
                    DecoderSpecificInfo = 12 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00
                  [Descriptor:06] size=2+1
          [stts] size=12+12
            entry_count = 1
          [stsc] size=12+40
            entry_count = 3
          [stco] size=12+10228
            entry_count = 2556
          [stsz] size=12+204492
            sample_size = 0
            sample_count = 51121
        [smhd] size=12+4
          balance = 0
  [udta] size=8+139
    [meta] size=12+127
      [hdlr] size=12+21
        handler_type = mdir
        handler_name =
      [ilst] size=8+86
        [.too] size=8+22
          [data] size=8+14
            type = 1
            lang = 0
            value = Google
        [gsst] size=8+17
        [gstd] size=8+23
[mdat] size=8+289495494

Here is the dump of my new MP4 file, including the subtitles (output of mp4dump):

[ftyp] size=8+24
  major_brand = isom
  minor_version = 200
  compatible_brand = isom
  compatible_brand = iso2
  compatible_brand = avc1
  compatible_brand = mp41
[free] size=8+0
[mdat] size=8+289904585
[moov] size=8+1267713
  [mvhd] size=12+96
    timescale = 1000
    duration = 1187028
    duration(ms) = 1187028
  [trak] size=8+547052
    [tkhd] size=12+80, flags=3
      enabled = 1
      id = 1
      duration = 1187020
      width = 1280.000000
      height = 720.000000
    [edts] size=8+28
      [elst] size=12+16
        entry_count = 1
        entry/segment duration = 1187020
        entry/media time = 2002
        entry/media rate = 1
    [mdia] size=8+546916
      [mdhd] size=12+20
        timescale = 30000
        duration = 35610575
        duration(ms) = 1187019
        language = und
      [hdlr] size=12+59
        handler_type = vide
        handler_name = ISO Media file produced by Google Inc.
      [minf] size=8+546805
        [vmhd] size=12+8, flags=1
          graphics_mode = 0
          op_color = 0000,0000,0000
        [dinf] size=8+28
          [dref] size=12+16
            [url ] size=12+0, flags=1
              location = [local to file]
        [stbl] size=8+546741
          [stsd] size=12+201
            entry_count = 1
            [avc1] size=8+189
              data_reference_index = 1
              width = 1280
              height = 720
              compressor = Lavc59.37.100 libx264
              [avcC] size=8+48
                Configuration Version = 1
                Profile = High
                Profile Compatibility = 0
                Level = 31
                NALU Length Size = 4
                Sequence Parameter = [67 64 00 1f ac d9 40 50 05 bb 01 6a 02 02 02 80 00 01 f4 80 00 75 30 07 8c 18 cb]
                Picture Parameter = [68 eb e3 cb 22 c0]
              [colr] size=8+11
              [pasp] size=8+8
              [btrt] size=8+12
          [stts] size=12+12
            entry_count = 1
          [stss] size=12+2036
            entry_count = 508
          [ctts] size=12+259772
            entry_count = 32471
          [stsc] size=12+28
            entry_count = 2
          [stsz] size=12+142308
            sample_size = 0
            sample_count = 35575
          [stco] size=12+142300
            entry_count = 35574
  [trak] size=8+720439
    [tkhd] size=12+80, flags=3
      enabled = 1
      id = 2
      duration = 1187028
      width = 0.000000
      height = 0.000000
    [edts] size=8+28
      [elst] size=12+16
        entry_count = 1
        entry/segment duration = 1187027
        entry/media time = 1024
        entry/media rate = 1
    [mdia] size=8+720303
      [mdhd] size=12+20
        timescale = 44100
        duration = 52348928
        duration(ms) = 1187050
        language = eng
      [hdlr] size=12+59
        handler_type = soun
        handler_name = ISO Media file produced by Google Inc.
      [minf] size=8+720192
        [smhd] size=12+4
          balance = 0
        [dinf] size=8+28
          [dref] size=12+16
            [url ] size=12+0, flags=1
              location = [local to file]
        [stbl] size=8+720132
          [stsd] size=12+114
            entry_count = 1
            [mp4a] size=8+102
              data_reference_index = 1
              channel_count = 2
              sample_size = 16
              sample_rate = 44100
              [esds] size=12+42
                [ESDescriptor] size=5+37
                  es_id = 2
                  stream_priority = 0
                  [DecoderConfig] size=5+23
                    stream_type = 5
                    object_type = 64
                    up_stream = 0
                    buffer_size = 0
                    max_bitrate = 196932
                    avg_bitrate = 196932
                    DecoderSpecificInfo = 12 10 56 e5 00
                  [Descriptor:06] size=5+1
              [btrt] size=8+12
          [stts] size=12+12
            entry_count = 1
          [stsc] size=12+373096
            entry_count = 31091
          [stsz] size=12+204496
            sample_size = 0
            sample_count = 51122
          [stco] size=12+142300
            entry_count = 35574
          [sgpd] size=12+14, version=1
            grouping_type = roll
            default_length = 2
            entry_count = 1
            entries:
              (       0) [ff ff]
          [sbgp] size=12+16
            grouping_type = roll
            entry_count = 1
  [udta] size=8+90
    [meta] size=12+78
      [hdlr] size=12+21
        handler_type = mdir
        handler_name =
      [ilst] size=8+37
        [.too] size=8+29
          [data] size=8+21
            type = 1
            lang = 0
            value = Lavf59.27.100

I cannot find a subtitle-related difference between the input.mp4 and output.mp4 files (the subtitles are definitely applied, since they show when playing).
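For reference, locating any box in an MP4 is mechanical: the file is a sequence of boxes, each starting with a 4-byte big-endian size and a 4-byte type (fourcc), and a separate subtitle track would typically show up as an extra trak whose hdlr type is something like text or sbtl. A minimal Python walker, run here on a synthetic buffer rather than the files above:

```python
import struct

def walk_boxes(buf):
    """Yield (fourcc, payload) for top-level ISOBMFF boxes in buf.

    Nested boxes (e.g. inside moov) would need recursion into payload;
    64-bit sizes (size == 1) and to-end-of-file sizes (size == 0) are
    not handled in this sketch.
    """
    off = 0
    while off + 8 <= len(buf):
        size, = struct.unpack('>I', buf[off:off + 4])
        fourcc = buf[off + 4:off + 8].decode('ascii')
        if size < 8:
            break
        yield fourcc, buf[off + 8:off + size]
        off += size

def make_box(fourcc, payload=b''):
    return struct.pack('>I', 8 + len(payload)) + fourcc.encode('ascii') + payload

# Synthetic file: ftyp + moov + mdat
data = (make_box('ftyp', b'isom\x00\x00\x02\x00')
        + make_box('moov')
        + make_box('mdat', b'\x01\x02'))
print([t for t, _ in walk_boxes(data)])  # ['ftyp', 'moov', 'mdat']
```

Note that ffmpeg's `-vf "subtitles=subtitle.srt"` filter burns the subtitles into the video frames themselves, so the output may contain no separate subtitle track at all, which would explain why no subtitle trak appears in the dump.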



How to push a named value to a vector in R's cpp11?

Within R's cpp11 package, a named value is pushed to a list as follows:

writable::list a;

a.push_back("my_name"_nm = 1);

How can I do the same if "my_name" is stored in a string variable?



Java recursive entity search

I have a simple recursive method that is meant to find the timestamp of an entity by a given Id. If there is no such Id, the result of entityManager.find is null, so I get an NPE; the catch block then kicks in, finds the next available Id, and calls the function again with the new Id.

This works well so far: when the function is called again with the next available Id, it finds the entity, but when it tries to return the timestamp, myEntity becomes null and it's an NPE again. I'm debugging in IntelliJ and stop the code just before this point, and myEntity does have an actual value there, before the return statement.

From this point the code just hangs and nothing visible happens. I guess there is something under the hood related to the recursion, but I couldn't figure it out yet.

Thanks in advance!

public LocalDateTime getInsertTimeById(Long id) {
        try {
            MyEntity myEntity = entityManager.find(MyEntity.class, id);
            return myEntity.getInsertTime();
        } catch (Exception e) {
            Long nextAvailableId = findNextAvailableId(id);
            return getInsertTimeById(nextAvailableId);
        }
    }

private Long findNextAvailableId(Long id) {
        CriteriaBuilder cb = entityManager.getCriteriaBuilder();
        CriteriaQuery<Long> cq = cb.createQuery(Long.class);
        Root<MyEntity> root = cq.from(MyEntity.class);

        cq.select(cb.min(root.get("id"))).where(cb.gt(root.get("id"), id));

        TypedQuery<Long> query = entityManager.createQuery(cq);
        return query.getSingleResult();
    }

PS: I know that eventually the findNextAvailableId() won't find an Id, but handling of that is irrelevant at this point.

UPDATE: I couldn't find the problem, so I had to come up with another solution, which, as the commenters state, is preferable anyway:

public LocalDateTime getInsertTimeById(Long mindId, Long maxId) {
        LocalDateTime insertTime = getInsertTimeByIdQuery(mindId);
        if (insertTime != null) {
            return insertTime;
        }
        Long nextAvailableId = findNextAvailableId(mindId, maxId);
        if (nextAvailableId == null) {
            return null;
        }
        return getInsertTimeByIdQuery(nextAvailableId);
    }


2023-10-10

Dataproc serverless does not seem to make use of spark property to connect to external hive metastore

I have a GCP Postgres instance that serves as an external Hive metastore for a Dataproc cluster. I would like to be able to utilize this metastore for Dataproc serverless jobs. Experimenting with serverless and following the documentation, I am already able to:

  • leverage the service account, subnetwork URI to access project resources
  • connect to PHS associated with the Dataproc cluster
  • build and push a custom image to container registry to be pulled by spark jobs

I thought the Spark property "spark.hadoop.hive.metastore.uris" would allow serverless Spark jobs to connect to the thrift server used by the Dataproc cluster, but it does not even seem to try to make the connection, and instead errors with:

Required table missing : "DBS" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.schema.autoCreateTables"

The non-serverless Dataproc Spark jobs log:

INFO hive.metastore: Trying to connect to metastore with URI thrift://cluster-master-node:9083

as they successfully make the connection.



2023-10-09

How to get an empty database when the database Docker container is started again

There is the following code in my docker-compose.yml

mariadb:
  image: mariadb:10.8.2
  restart: always
  environment:
    MARIADB_ROOT_PASSWORD: test
    MARIADB_DATABASE: mydb
  ports:
  -  "3306:3306"
  container_name: mariadb
  networks:
  - my-network

Then I used the following commands to start the Docker container:

docker stop mariadb
docker rm mariadb
docker-compose up -d --remove-orphans mariadb
sleep 15 # wait until database is fully started
docker exec mariadb mysql -u root -ptest -e "CREATE DATABASE product;"

After starting the Docker container, i.e. after the command docker-compose up -d --remove-orphans mariadb, the next command creates a database called product. However, the problem is that this fails with the following error: ERROR 1007 (HY000) at line 1: Can't create database 'dictionary'; database exists

Question: Why does the previously created database still exist even though the container is removed at the beginning? How can I create the container a second time without the existing database?



2023-10-08

I get an error on a locally-run Laravel 5 app [closed]

Basically, the error appeared after I copied a project from the hosting cPanel to study it locally: when I ran it using localhost, the web URLs used https when calling everything like JavaScript and CSS, which caused errors where data was not loaded, because locally it should be http, not https.

I have tried various methods, such as changing the .env, but it still doesn't work. What can I try next?

I am doing an internship.



When clicking the drawer, the widget rebuilds and a variable's value changes to null

I'm using Riverpod, and I don't understand why, when I click on the drawer, the widget rebuilds and the variable in my SearchNotifier class becomes null.

Here is my Provider

class SearchNotifier extends StateNotifier<SearchResult> {
  SearchResult? item = SearchResult();

  String? authToken;

  SearchNotifier() : super(SearchResult());

  SearchResult get getSearchResult {
    if (this.item != null)
      return this.item!;
    else
      return SearchResult();
  }

  void setToken(String token) {
    this.authToken = token;
  }

  Future<void> doSearch(SearchObject searchObject) async {
    var logger = Logger();
    //logger.d(searchObject.page);
    final url = Uri.parse(Constants.API_SEARCH);
    try {
      final response = await http.post(
        url,
        body: json.encode(searchObject.toJson()),
        headers: Constants.headerWithAuth(this.authToken!),
      );
      final responseData = json.decode(response.body.toString());
      //logger.d(responseData['foods'][0]['images'][0]);
      if (responseData['response_code'] != "0") {
        throw HttpException(responseData['response_message']);
      }
      SearchResult searchResult = SearchResult.fromJson(responseData);
      if (searchResult.current_page != null && searchResult.current_page! > 1) {
        state.copyWith(
          current_page: searchResult.current_page,
          first_page: searchResult.first_page,
          last_page: searchResult.last_page,
          next_page: searchResult.next_page,
          out_of_range: searchResult.out_of_range,
          prev_page: searchResult.prev_page,
          total_pages: searchResult.total_pages,
        );
        state.foods = [...state.foods!, ...searchResult.foods!];
      } else {
        state = searchResult;
      }
    } catch (error) {
      throw error;
    }
  }
}

final searchProvider =
    StateNotifierProvider<SearchNotifier, SearchResult>((ref) {
  final searchNotifier = SearchNotifier();
  searchNotifier.setToken(ref.watch(authProvider).token!);
  return searchNotifier;
});

Here is part of my screen code:

class SearchScreen extends ConsumerStatefulWidget {
  const SearchScreen({
    super.key,
    required this.lat,
    required this.lng,
  });

  static const routeName = '/search';

  final String lat;
  final String lng;

  @override
  ConsumerState<SearchScreen> createState() => _SearchScreenState();
}

class _SearchScreenState extends ConsumerState<SearchScreen> {
  var _isInit = true;
  var _isFirstLoading = true;
  var _isLoadMore = false;
  var _isLastPage = false;
  var _currentPage = 0;
  var _nextPage;
  var _listViewController = ScrollController();
  var _gridViewController = ScrollController();
  var _textSearchController = TextEditingController();
  bool viewType_IsListView = true;
  @override
  void didChangeDependencies() {
    if (_isInit) {
      _listViewController = ScrollController()..addListener(loadMore);
      _gridViewController = ScrollController()..addListener(loadMoreGridView);
    }
    _isInit = false;
    super.didChangeDependencies();
  }

  @override
  void initState() {
    // TODO: implement initState
    getData();
    super.initState();
  }

  @override
  void dispose() {
    // TODO: implement dispose
    _listViewController.removeListener(loadMore);
    _gridViewController.removeListener(loadMoreGridView);
    super.dispose();
  }

  void loadMore() {
    if (_isLastPage == false &&
        _nextPage != null &&
        _isLoadMore == false &&
        _listViewController.position.extentAfter < 300) {
      getData(_nextPage);
    }
  }

  void loadMoreGridView() {
    if (_isLastPage == false &&
        _nextPage != null &&
        _isLoadMore == false &&
        _gridViewController.position.extentAfter < 300) {
      getData(_nextPage);
    }
  }

  void getData([int page = 0]) {
    setState(() {
      _isLoadMore = true;
      if (page > 0) {
        _isLoadMore = true;
      }
    });
    DateTime now = DateTime.now();
    String formattedDateTime =
        DateFormat('dd-MM-yyyy HH:mm \'SGT\'').format(now);
    SearchObject searchObject = SearchObject(
      service_type: null,
      booking_date_time: null,
      query: _textSearchController.text,
      page: page,
      filters: null,
      sort_by: null,
      // current do not pass lat lng for test
      // location_latitude: double.tryParse(widget.lat),
      // location_longitude: double.tryParse(widget.lng),
    );
    // new SearchObjectFilters(
    //     price_range: '0-999',
    //     quantity: 0,
    //     meal_type: [],
    //     amenities: [],
    //     cuisines: [],
    //     dietary_needs: [],
    //   )

    // print('search ' + widget.lat);
    // print('search ' + widget.lng);

    ref.read(searchProvider.notifier).doSearch(searchObject).then((_) {
      setState(() {
        _isFirstLoading = false;
        _isLoadMore = false;
      });
    });
  }

  void getEviroment() async {
    await ref.read(environmentVariablesProvider.notifier).getPrefs();
  }

  void changeViewType(bool viewType) {
    setState(() {
      viewType_IsListView = viewType;
    });
  }

  @override
  Widget build(BuildContext context) {
    var searchResults = ref.watch(searchProvider);
    print('Run here!');
    print(searchResults.foods);
    if (searchResults.last_page != null && searchResults.current_page != null) {
      setState(() {
        _isLastPage = searchResults.last_page!;
        _currentPage = searchResults.current_page!;
        _nextPage = searchResults.next_page;
      });
    } else {
      setState(() {
        _isLastPage = false;
        if (searchResults.current_page != null) {
          _currentPage = searchResults.current_page!;
        }
        _nextPage = searchResults.next_page;
        if (_nextPage == null) {
          _isLastPage = true;
        }
      });
    }
    return Scaffold(
      drawer: const MainDrawer(),
      appBar: AppBar(

This will be null after clicking the drawer. The first time the screen loads, I have data.

Null here

Why is it null?